Physics


Quantum roundup: Lots of companies announcing new tech


More superposition, less supposition

IBM follows through on its June promises, plus more trapped ion news.

IBM has moved to large-scale manufacturing of its Quantum Loon chips. Credit: IBM

The end of the year is usually a busy time in the quantum computing arena, as companies often try to announce that they’ve reached major milestones before the year wraps up. This year has been no exception. And while not all of these announcements involve interesting new architectures like the one we looked at recently, they’re a good way to mark progress in the field, and they often involve the sort of smaller, incremental steps needed to push the field forward.

What follows is a quick look at a handful of announcements from the past few weeks that struck us as potentially interesting.

IBM follows through

IBM is one of the companies announcing a brand-new architecture this year. That’s no surprise, given that the company promised to do so back in June; this week, it confirmed that it has built the two processors it described then. The first, called Loon, is focused on the architecture that IBM will use to host error-corrected logical qubits. Loon represents two major changes for the company: a shift to nearest-neighbor connections and the addition of long-distance connections.

IBM had previously used what it termed the “heavy hex” architecture, in which alternating qubits were connected to either two or three of their neighbors, forming a set of overlapping hexagonal structures. In Loon, the company is using a square grid, with each qubit having connections to its four closest neighbors. This higher density of connections can enable more efficient use of the qubits during computations. But qubits in Loon have additional long-distance connections to other parts of the chip, which will be needed for the specific type of error correction that IBM has committed to. It’s there to allow users to test out a critical future feature.
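For a concrete sense of what that change in connectivity means, here is a minimal sketch (our own illustration, not IBM's actual coupling maps) that builds a square-grid coupling map and counts connections per qubit:

```python
# Toy illustration of square-grid connectivity (not IBM's actual coupling map).
def square_grid_coupling(rows, cols):
    """Return nearest-neighbor qubit pairs on a rows x cols grid."""
    idx = lambda r, c: r * cols + c
    pairs = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                pairs.append((idx(r, c), idx(r, c + 1)))  # horizontal coupler
            if r + 1 < rows:
                pairs.append((idx(r, c), idx(r + 1, c)))  # vertical coupler
    return pairs

pairs = square_grid_coupling(4, 4)
degree = [0] * 16
for a, b in pairs:
    degree[a] += 1
    degree[b] += 1
print(max(degree))  # 4: interior qubits connect to four neighbors
# Heavy-hex qubits, by contrast, alternate between two and three neighbors.
```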

The second processor, Nighthawk, is focused on the now. It also has the nearest-neighbor connections and a square grid structure, but it lacks the long-distance connections. Instead, the focus with Nighthawk is to get error rates down so that researchers can start testing algorithms for quantum advantage—computations where quantum computers have a clear edge over classical algorithms.

In addition, the company is launching a GitHub repository that will allow the community to deposit code and performance data for both classical and quantum algorithms, enabling rigorous evaluations of relative performance. Right now, the repository is organized around three categories of algorithms that IBM expects are most likely to demonstrate a verifiable quantum advantage.

This isn’t the only follow-up to IBM’s June announcement, which also saw the company describe the algorithm it would use to identify errors in its logical qubits and the corrections needed to fix them. In late October, the company said it had confirmed that the algorithm could work in real time when run on an FPGA made in collaboration with AMD.

Record lows

A few years back, we reported on a company called Oxford Ionics, which had just announced a record-low error rate in some qubit operations using trapped ions. Most trapped-ion quantum computers move qubits by manipulating electromagnetic fields but perform computational operations using lasers. Oxford Ionics figured out how to perform operations using electromagnetic fields as well, meaning more of its processing benefited from our ability to precisely manufacture circuitry (lasers were still needed for tasks like producing a readout of the qubits). And as we noted, it could perform these computational operations extremely effectively.

But Oxford Ionics never made a major announcement that would give us a good excuse to describe its technology in more detail. The company was ultimately acquired by IonQ, a competitor in the trapped-ion space.

Now, IonQ is building on what it gained from Oxford Ionics, announcing a new record for two-qubit gates: a fidelity above 99.99 percent, or an error rate below one in 10,000. That could be critical for the company, as a low error rate for hardware qubits means fewer are needed to get good performance from error-corrected qubits.

But the details of the two-qubit gates are perhaps more interesting than the error rate. Two-qubit gates involve bringing both qubits involved into close proximity, which often requires moving them. That motion pumps a bit of energy into the system, raising the ions’ temperature and leaving them slightly more prone to errors. As a result, any movement of the ions is generally followed by cooling, in which lasers are used to bleed energy back out of the qubits.

This process, which involves two distinct cooling steps, is slow. So slow that as much as two-thirds of the time spent on operations involves the hardware waiting around while recently moved ions are cooled back down. The new IonQ announcement describes a method for performing two-qubit gates that doesn’t require the ions to be fully cooled, which allows one of the two cooling steps to be skipped entirely. In fact, coupled with earlier work on one-qubit gates, it raises the possibility that the entire machine could operate with its ions at a still very cold but slightly elevated temperature, eliminating that cooling step across the board.

That would shorten operation times and let researchers do more before the limit of a quantum system’s coherence is reached.

State of the art?

The last announcement comes from another trapped-ion company, Quantum Art. A couple of weeks back, it announced a collaboration with Nvidia that resulted in a more efficient compiler for operations on its hardware. On its own, this isn’t especially interesting. But it’s emblematic of a trend that’s worth noting, and it gives us an excuse to look at Quantum Art’s technology, which takes a distinct approach to boosting the efficiency of trapped-ion computation.

First, the trend: Nvidia’s interest in quantum computing. The company isn’t interested in the quantum aspects themselves (at least not publicly); instead, it sees an opportunity to get further entrenched in high-performance computing. There are three areas where the computational capacity of GPUs can play a role. One is small-scale modeling of quantum processors, so that users can perform initial testing of algorithms without committing to paying for access to the real thing. Another is what Quantum Art is announcing: using GPUs as part of a compiler chain to do all the computations needed to find more efficient ways of executing an algorithm on specific quantum hardware.
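The first of those roles is conceptually simple: simulating a small quantum processor is just linear algebra on an exponentially large state vector, exactly the kind of work GPUs are good at. Here is a bare-bones sketch in NumPy (our own illustration, not any vendor's API):

```python
# Minimal statevector simulation of a two-qubit circuit: the same math that
# GPU-accelerated simulators scale up to dozens of qubits.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # controlled-NOT gate

state = np.zeros(4)
state[0] = 1.0                                  # two qubits in |00>
state = np.kron(H, np.eye(2)) @ state           # Hadamard on the first qubit
state = CNOT @ state                            # entangle into a Bell state
print(np.abs(state) ** 2)                       # [0.5, 0, 0, 0.5]
```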

Finally, there’s a potential role in error correction. Error correction involves some indirect measurements of a handful of hardware qubits to determine the most likely state that a larger collection (called a logical qubit) is in. This requires modeling a quantum system in real time, which is quite difficult—hence the computational demands that Nvidia hopes to meet. Regardless of the precise role, there has been a steady flow of announcements much like Quantum Art’s: a partnership with Nvidia that will keep the company’s hardware involved if the quantum technology takes off.
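To make the decoding task concrete, here is a deliberately tiny sketch using a three-bit repetition code. It is our own toy example, not the code or decoder any of these companies actually runs, but it shows the basic pattern of turning parity-check measurements into a correction:

```python
# Toy decoder: parity checks on neighboring bits give a "syndrome," and the
# decoder infers the single most likely physical error behind it.
def syndrome(bits):
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode(bits):
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip is not None:
        bits[flip] ^= 1       # correct the most likely single error
    return bits[0]            # recovered logical value

print(decode([0, 1, 0]))      # a single flip on the middle bit -> 0
```

Real decoders work on far larger codes and must keep up with the hardware's measurement cycle, which is where the GPU (or FPGA) horsepower comes in.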

In Quantum Art’s case, that technology is a bit unusual. The trapped-ion companies we’ve covered so far are all taking different routes to the same place: moving one or two ions into a location where operations can be performed and then executing one- or two-qubit gates. Quantum Art’s approach is to perform gates with much larger collections of ions. At the compiler level, it would be akin to figuring out which qubits need a specific operation performed, clustering them together, and doing it all at once. Obviously, there are potential efficiency gains here.

The challenge would normally be moving so many qubits around to create these clusters. But Quantum Art uses lasers to “pin” ions in a row, so that each pinned ion isolates the ions to its right from those to its left. Each cluster can then be operated on separately. In between operations, the pins can be moved to new locations, creating different clusters for the next set of operations. (Quantum Art calls each cluster of ions a “core” and presents this as multicore quantum computing.)
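At the compiler level, the bookkeeping looks something like the following sketch (our own toy model; the names and data layout are not Quantum Art's): pinned positions split a row of ions into independently addressable cores, and moving the pins between rounds regroups them.

```python
# Conceptual sketch of "pinning": pins split one row of ions into cores.
def cores(ions, pins):
    """Split a list of ions into clusters at the pinned positions."""
    clusters, current = [], []
    for i, ion in enumerate(ions):
        if i in pins:
            if current:
                clusters.append(current)
            current = []          # the pinned ion isolates left from right
        else:
            current.append(ion)
    if current:
        clusters.append(current)
    return clusters

ions = list(range(10))
print(cores(ions, pins={3, 7}))   # [[0, 1, 2], [4, 5, 6], [8, 9]]
# Next round: move the pins to form different cores.
print(cores(ions, pins={2, 5}))   # [[0, 1], [3, 4], [6, 7, 8, 9]]
```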

At the moment, Quantum Art is behind some of its competitors in terms of qubit count and interesting demonstrations, and it’s not pledging to scale quite as fast. But the company’s founders are convinced that the complexity of performing so many individual operations and moving so many ions around will eventually catch up with those competitors, while the added efficiency of multi-qubit gates will allow Quantum Art to scale better.

This is just a small sampling of all the announcements from this fall, but it should give you a sense of how rapidly the field is progressing—from technology demonstrations to identifying cases where quantum hardware has a real edge and exploring ways to sustain progress beyond those first successes.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Next-generation black hole imaging may help us understand gravity better

Right now, we probably don’t have the ability to detect these small changes in phenomena. However, that may change, as a next-generation version of the Event Horizon Telescope is being considered, along with a space-based telescope that would operate on similar principles. So the team (four researchers based in Shanghai and CERN) decided to repeat an analysis they did shortly before the Event Horizon Telescope went operational, and consider whether the next-gen hardware might be able to pick up features of the environment around the black hole that might discriminate among different theorized versions of gravity.

Theorists have been busy, and there are a lot of potential replacements for general relativity out there. So, rather than working their way through the list, the researchers used a model of gravity (the parametric Konoplya–Rezzolla–Zhidenko metric) that isn’t specific to any given hypothesis. Instead, it allows some of its parameters to be changed, letting the team vary the behavior of gravity within limits. To get a sense of the sort of differences that might be present, the researchers swapped two different parameters between zero and one, giving them four different options. Those results were compared to the Kerr metric, the standard general relativity description of a rotating black hole and its event horizon.

Small but clear differences

Using those five versions of gravity, the researchers modeled the three-dimensional environment near the event horizon with hydrodynamic simulations, including infalling matter, the magnetic fields it produces, and the jets of matter that those magnetic fields power.

The results resemble the sorts of images that the Event Horizon Telescope produced. These include a bright ring with substantial asymmetry, where one side is significantly brighter due to the rotation of the black hole. And while the differences among the variations of gravity are subtle, they’re there. One extreme version produced the smallest but brightest ring; another had a reduced contrast between the bright and dim sides of the ring. There were also differences in the width of the jets produced in these models.



New quantum hardware puts the mechanics in quantum mechanics


As a demonstration, the machine was used to test a model of superconductivity.

Quantum computers based on ions or atoms have one major advantage: The qubits themselves aren’t manufactured, so there’s no device-to-device variation among them. Every atom is the same and should perform similarly every time. And since the qubits themselves can be moved around, it’s theoretically possible to entangle any atom or ion with any other in the system, allowing for a lot of flexibility in how algorithms and error correction are performed.

This combination of consistent, high-fidelity performance with all-to-all connectivity has led many key demonstrations of quantum computing to be done on trapped-ion hardware. Unfortunately, the hardware has been held back a bit by relatively low qubit counts—a few dozen compared to the hundred or more seen in other technologies. But on Wednesday, a company called Quantinuum announced a new version of its trapped-ion hardware that significantly boosts the qubit count and uses some interesting technology to manage their operation.

Trapped-ion computing

Both neutral atom and trapped-ion computers store their qubits in the spin of the nucleus. That spin is somewhat shielded from the environment by the cloud of electrons around the nucleus, giving these qubits a relatively long coherence time. While neutral atoms are held in place by a network of lasers, trapped ions are manipulated via electromagnetic control based on the ion’s charge. This means that key components of the hardware can be built using standard electronic manufacturing, although lasers are still needed for manipulations and readout.

While the electronics are static—they stay wherever they were manufactured—they can be used to move the ions around. That means that, as long as the layout of the trackways the ions travel along allows it, any two ions can be brought into close proximity and entangled. This all-to-all connectivity can enable more efficient implementation of algorithms performed directly on the hardware qubits, or the use of error-correction codes that require a complicated geometry of connections. That’s one reason why Microsoft used a Quantinuum machine to demonstrate an error-correction code based on a tesseract.

But arranging the trackways so that any two qubits can be brought next to each other becomes increasingly complicated as systems grow. Moving ions around is a relatively slow process, so retrieving two ions from the far ends of a chip too often can push a system up against the coherence time of its qubits. In the long term, Quantinuum plans to build chips with a square grid reminiscent of the street layout of many cities. But doing so will require mastering the flow of ions through four-way intersections.

And that’s what Quantinuum is doing in part with its new chip, named Helios. It has a single intersection that couples two ion-storage areas, enabling operations as ions slosh from one end of the chip to the other. And it comes with significantly more qubits than its earlier hardware, moving from 56 to 96 qubits without sacrificing performance. “We’ve kept and actually even improved the two qubit gate fidelity,” Quantinuum VP Jenni Strabley told Ars. “So we’re not seeing any degradation in the two-qubit gate fidelity as we go to larger and larger sizes.”

Doing the loop

The image below was taken using the fluorescence of the atoms in the hardware itself. As you can see, the layout is dominated by two features: a loop at the left and two legs extending to the right. They’re connected by a four-way intersection, which the Quantinuum staff described as central to the computer’s operation.


The actual ions trace out the physical layout of the Helios system, featuring a storage ring and two legs that contain dedicated operation sites. Credit: Quantinuum

The system works by rotating the ions around the loop. As an ion reaches the intersection, the system chooses whether to kick it into one of the legs and, if so, which leg. “We spin that ring almost like a hard drive, really, and whenever the ion that we want to gate gets close to the junction, there’s a decision that happens: Either that ion goes [into the legs], or it kind of makes a little turn and goes back into the ring,” said David Hayes, Quantinuum’s director of Computational Design and Theory. “And you can make that decision with just a few electrodes that are right at that X there.”

Each leg has a region where operations can take place, so this system can ensure that the right qubits are present together in the operation zones for things like two-qubit gates. Once the operations are complete, the qubits can be moved into the leg storage regions, and new qubits can be shuffled in. When the legs fill up, the qubits can be sent back to the loop, and the process is restarted.

“You get less traffic jams if all the traffic is running one way going through the gate zones,” Hayes told Ars. “If you had to move them past each other, you would have to do kind of physical swaps, and you want to avoid that.”
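The sorting logic Hayes describes can be captured in a few lines of toy code (our own schematic, not Quantinuum's control software): ions circulate around the ring, and an ion needed for the next gate is diverted into a leg when it reaches the junction.

```python
# Schematic model of the ring-and-legs traffic pattern described above.
from collections import deque

ring = deque(range(8))            # ion labels circulating in the storage ring
wanted = {2, 5}                   # ions needed together in an operation zone
leg = []

while wanted:
    ion = ring.popleft()          # next ion to arrive at the junction
    if ion in wanted:
        leg.append(ion)           # divert it into the leg
        wanted.remove(ion)
    else:
        ring.append(ion)          # otherwise it turns back into the ring

print(leg)                        # [2, 5] -> perform the two-qubit gate
ring.extend(leg)                  # afterward, return the ions to the ring
```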

Obviously, issuing all the commands to control the hardware will be quite challenging for anything but the simplest operations. That puts an increasing emphasis on the compilers that add a significant layer of abstraction between what you want a quantum computer to do and the actual hardware commands needed to implement it. Quantinuum has developed its own compiler to take user-generated code and produce something that the control system can convert into the sequence of commands needed.

The control system now incorporates a real-time engine that can read data from Helios and update the commands it issues based on the state of the qubits. Quantinuum has this portion of the system running on GPUs rather than requiring customized hardware.

Quantinuum’s SDK for users is called Guppy and is based on Python, which has been modified to allow users to describe what they’d like the system to do. Helios is being accompanied by a new version of Guppy that includes some traditional programming tools like FOR loops and IF-based conditionals. These will be critical for the sorts of things we want to do as we move toward error-corrected qubits. This includes testing for errors, fixing them if they’re present, or repeatedly attempting initialization until it succeeds without error.
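As an illustration of why that control flow matters, here is a repeat-until-success initialization loop written as plain Python. The hardware calls are hypothetical stand-ins for the sake of the example, not Guppy's actual API.

```python
import random

random.seed(0)

# Hypothetical stand-ins for hardware calls; not Guppy's real interface.
def reset_qubit():
    pass

def measure_init_check():
    return random.choice([0, 1])   # pretend the check sometimes fails

def prepare_qubit(max_attempts=10):
    """Repeat-until-success initialization: the kind of pattern that
    FOR loops and measurement-conditioned IFs make possible."""
    for attempt in range(max_attempts):
        reset_qubit()
        if measure_init_check() == 0:   # condition on a mid-circuit result
            return attempt              # clean initialization achieved
    raise RuntimeError("initialization kept failing")

print(prepare_qubit())
```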

Hayes said the new version is also moving toward error correction. Thanks to Guppy’s ability to dynamically reassign qubits, Helios will be able to operate as a machine with 94 qubits while detecting errors on any of them. Alternatively, the 96 hardware qubits can be configured as a single unit that hosts 48 error-corrected qubits. “It’s actually a concatenated code,” Hayes told Ars. “You take two error detection codes and weave them together… it’s a single code block, but it has 48 logical qubits housed inside of it.” (Hayes said it’s a distance-four code, meaning it can fix up to two errors that occur simultaneously.)

Tackling superconductivity

While Quantinuum hardware has always had low error rates relative to most of its competitors, there was only so much you could do with 56 qubits. With 96 now at their disposal, researchers at the company decided to build a quantum implementation of a model (called the Fermi-Hubbard model) that’s meant to help study the electron pairing that takes place during the transition to superconductivity.

“There are definitely terms that the model doesn’t capture,” Quantinuum’s Henrik Dreyer acknowledged. “They neglect their electrorepulsion that [the electrons] still have—I mean, they’re still negatively charged; they are still repelling. There are definitely terms that the model doesn’t capture. On the other hand, I should say that this Fermi-Hubbard model—it has many of the features that a superconductor has.”

Superconductivity occurs when electrons join to form what are called Cooper pairs, overcoming their normal repulsion. And the model can tell that apart from normal conductivity in the same material.

“You ask the question ‘What’s the chance that one of the charged particles spontaneously disappears because of quantum fluctuations and goes over here?’” Dreyer said, describing what happens when simulating a conductor. “What people do in superconductivity is they take this concept, but instead of asking what’s the chance of a single-charge particle to tunnel over there spontaneously, they’re asking what is the chance of a pair to tunnel spontaneously?”

Even in its simplified form, however, it’s still a model of a quantum system, with all the computational complexity that comes with that. So the Quantinuum team modeled a few systems that classical computers struggle with. One was simply looking at a larger grid of atoms than most classical simulations have done; another expanded the grid in an additional dimension, modeling layers of a material. Perhaps the most complicated simulation involved what happens when a laser pulse of the right wavelength hits a superconductor at room temperature, an event that briefly induces a superconducting state.

And the system produced results, even without error correction. “It’s maybe a technical point, but I think it’s very important technical point, which is [that] the circuits that we ran, they all had errors,” Dreyer told Ars. “Maybe on the average of three or so errors, and for some reason, that is not very fully understood for this application, it doesn’t matter. You still get almost the perfect result in some of these cases.”

That said, he also indicated that having higher-fidelity hardware would help the team do a better job of putting the system in a ground state or running the simulation for longer. But those will have to wait for future hardware.

What’s next

If you look at Quantinuum’s roadmap for that future hardware, Helios would appear to be the last of its kind. It and earlier versions of the processors have loops and large straight stretches; everything in the future features a grid of squares. But both Strabley and Hayes said that Helios has several key transitional features. “Those ions are moving through that junction many, many times over the course of a circuit,” Strabley told Ars. “And so it’s really enabled us to work on the reliability of the junction, and that will translate into the large-scale systems.”

Image of a product roadmap, with years from 2020 to 2029 noted across the top. There are five processors arrayed from left to right, each with increasingly complex geometry.

Helios sits at the pivot between the simple geometries of earlier Quantinuum processors and the grids of future designs. Credit: Quantinuum

The collection of squares seen in future processors will also allow the same sorts of operations that are done today with the loop and legs of Helios. Some squares can serve as the equivalent of the loop, handling storage and sorting, while some of the straight stretches nearby can be used for operations.

“What will be common to both of them is kind of the general concept that you can have a storage and sorting region and then gating regions on the side and they’re separated from one another,” Hayes said. “It’s not public yet, but that’s the direction we’re heading: a storage region where you can do really fast sorting in these 2D grids, and then gating regions that have parallelizable logical operations.”

In the meantime, we’re likely to see improvements made to Helios—ideas that didn’t quite make today’s release. “There’s always one more improvement that people want to make, and I’m the person that says, ‘No, we’re going to go now. Put this on the market, and people are going to go use it,’” Strabley said. “So there is a long list of things that we’re going to add to improve the performance. So expect that over the course of Helios, the performance is going to get better and better and better.”

That performance is likely to be used for the sort of initial work done on superconductivity or the algorithm recently described by Google, which is at or a bit beyond what classical computers can manage and may start providing some useful insights. But it will still be a generation or two before we start seeing quantum computing fulfill some of its promise.




Google has a useful quantum algorithm that outperforms a supercomputer


An approach it calls “quantum echoes” takes 13,000 times longer on a supercomputer.


The work relied on Google’s current-generation quantum hardware, the Willow chip. Credit: Google


A few years back, Google made waves when it claimed that some of its hardware had achieved quantum supremacy, performing operations that would be effectively impossible to simulate on a classical computer. That claim didn’t hold up especially well, as mathematicians later developed methods to help classical computers catch up, leading the company to repeat the work on an improved processor.

While this back-and-forth was unfolding, the field became less focused on quantum supremacy and more on two additional measures of success. The first is quantum utility, in which a quantum computer performs computations that are useful in some practical way. The second is quantum advantage, in which a quantum system completes calculations in a fraction of the time it would take a typical computer. (IBM and a startup called Pasqal have published a useful discussion of what would be required to verifiably demonstrate a quantum advantage.)

Today, Google and a large collection of academic collaborators are publishing a paper describing a computational approach that demonstrates a quantum advantage compared to current algorithms—and may actually help us achieve something useful.

Out of time

Google’s latest effort centers on something it’s calling “quantum echoes.” The approach could be described as a series of operations on the hardware qubits that make up its machine. These qubits hold a single bit of quantum information in a superposition between two values, with probabilities of finding the qubit in one value or the other when it’s measured. Each qubit is entangled with its neighbors, allowing its probability to influence those of all the qubits around it. The operations that allow computation, called gates, are ways of manipulating these probabilities. Most current hardware, including Google’s, performs manipulations on one or two qubits at a time (termed one- and two-qubit gates, respectively).

For quantum echoes, the operations involved performing a set of two-qubit gates, altering the state of the system, and later performing the reverse set of gates. On its own, this would return the system to its original state. But for quantum echoes, Google inserts single-qubit gates performed with a randomized parameter. This alters the state of the system before the reverse operations take place, ensuring that the system won’t return to exactly where it started. That explains the “echoes” portion of the name: You’re sending an imperfect copy back toward where things began, much like an echo involves the imperfect reversal of sound waves.

That’s what the process looks like in terms of operations performed on the quantum hardware. But it’s probably more informative to think of it in terms of a quantum system’s behavior. As Google’s Tim O’Brien explained, “You evolve the system forward in time, then you apply a small butterfly perturbation, and then you evolve the system backward in time.” The forward evolution is the first set of two-qubit gates, the small perturbation is the randomized one-qubit gate, and the second set of two-qubit gates is the equivalent of sending the system backward in time.

Because this is a quantum system, however, strange things happen. “On a quantum computer, these forward and backward evolutions, they interfere with each other,” O’Brien said. One way to think about that interference is in terms of probabilities. The system has multiple paths between its start point and the point of reflection—where it goes from evolving forward in time to evolving backward—and from that reflection point back to a final state. Each of those paths has a probability associated with it. And since we’re talking about quantum mechanics, those paths can interfere with each other, increasing some probabilities at the expense of others. That interference ultimately determines where the system ends up.

(Technically, these are termed “out of time order correlations,” or OTOCs. If you read the Nature paper describing this work, prepare to see that term a lot.)
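Here is a minimal statevector sketch of that forward-perturb-reverse structure (a toy NumPy model on a handful of simulated qubits, not Google's hardware circuits). Without the "butterfly" perturbation, the echo returns the system exactly to its starting state; with it, the overlap drops below 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                    # a few simulated qubits
dim = 2 ** n

# Random forward evolution U, a stand-in for the fixed set of two-qubit gates.
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(M)                   # QR factorization yields a unitary

# Randomized single-qubit "butterfly" perturbation on the first qubit.
theta = rng.uniform(0, 2 * np.pi)
rz = np.diag([1.0, np.exp(1j * theta)])
B = np.kron(rz, np.eye(dim // 2))

psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0                            # initial state
echoed = U.conj().T @ (B @ (U @ psi0))   # forward, perturb, reverse

# With B replaced by the identity, this overlap would be exactly 1.
print(abs(np.vdot(psi0, echoed)) ** 2)
```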

Demonstrating advantage

So how do you turn quantum echoes into an algorithm? On its own, a single “echo” can’t tell you much about the system—the probabilities ensure that any two runs might show different behaviors. But if you repeat the operations multiple times, you can begin to understand the details of this quantum interference. And performing the operations on a quantum computer ensures that it’s easy to simply rerun the operations with different random one-qubit gates and get many instances of the initial and final states—and thus a sense of the probability distributions involved.

This is also where Google’s quantum advantage comes from. Everyone involved agrees that the precise behavior of a quantum echo of moderate complexity can be modeled using any leading supercomputer. But doing so is very time-consuming, so repeating those simulations a few times becomes unrealistic. The paper estimates that a measurement that took its quantum computer 2.1 hours to perform would take the Frontier supercomputer approximately 3.2 years. Unless someone devises a far better classical algorithm than what we have today, this represents a pretty solid quantum advantage.
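As a quick sanity check on those numbers, using only the figures quoted here:

$$\frac{3.2\ \text{years}}{2.1\ \text{hours}} \approx \frac{3.2 \times 365 \times 24}{2.1} \approx 1.3 \times 10^{4},$$

which is where the roughly 13,000-fold figure in the subhead comes from.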

But is it a useful algorithm? The repeated sampling can act a bit like the Monte Carlo sampling done to explore the behavior of a wide variety of physical systems. Typically, however, we don’t view algorithms as modeling the behavior of the underlying hardware they’re being run on; instead, they’re meant to model some other physical system we’re interested in. That’s where Google’s announcement stands apart from its earlier work—the company believes it has identified an interesting real-world physical system with behaviors that the quantum echoes can help us understand.

That system is a small molecule in a Nuclear Magnetic Resonance (NMR) machine. In a second draft paper being published on the arXiv later today, Google has collaborated with a large collection of NMR experts to explore that use.

From computers to molecules

NMR is based on the fact that the nucleus of every atom has a quantum property called spin. When nuclei are held near to each other, such as when they’re in the same molecule, these spins can influence one another. NMR uses magnetic fields and photons to manipulate these spins and can be used to infer structural details, like how far apart two given atoms are. But as molecules get larger, these spin networks can extend for greater distances and become increasingly complicated to model. So NMR has been limited to focusing on the interactions of relatively nearby spins.

For this work, though, the researchers figured out how to use an NMR machine to create the physical equivalent of a quantum echo in a molecule. The work involved synthesizing the molecule with a specific isotope of carbon (carbon-13) in a known location in the molecule. That isotope could be used as the source of a signal that propagates through the network of spins formed by the molecule’s atoms.

“The OTOC experiment is based on a many-body echo, in which polarization initially localized on a target spin migrates through the spin network, before a Hamiltonian-engineered time-reversal refocuses to the initial state,” the team wrote. “This refocusing is sensitive to perturbations on distant butterfly spins, which allows one to measure the extent of polarization propagation through the spin network.”

Naturally, something this complicated needed a catchy nickname. The team came up with TARDIS, or Time-Accurate Reversal of Dipolar InteractionS. While that name captures the “out of time order” aspect of OTOC, it’s simply a set of control pulses sent to the NMR sample that starts a perturbation of the molecule’s network of nuclear spins. A second set of pulses then reflects an echo back to the source.

The reflections that return are imperfect, with noise coming from two sources. The first is simply imperfections in the control sequence, a limitation of the NMR hardware. But the second is the influence of fluctuations happening in distant atoms along the spin network. These happen at a certain frequency at random, or the researchers could insert a fluctuation by targeting a specific part of the molecule with randomized control signals.

The influence of what’s going on in these distant spins could allow us to use quantum echoes to tease out structural information at greater distances than we currently do with NMR. But to do so, we need an accurate model of how the echoes will propagate through the molecule. And again, that’s difficult to do with classical computations. But it’s very much within the capabilities of quantum computing, which the paper demonstrates.

Where things stand

For now, the team stuck to demonstrations on very simple molecules, making this work mostly a proof of concept. But the researchers are optimistic that there are many ways the system could be used to extract structural information from molecules at distances that are currently unobtainable using NMR. They list a lot of potential upsides that should be explored in the discussion of the paper, and there are plenty of smart people who would love to find new ways of using their NMR machines, so the field is likely to figure out pretty quickly which of these approaches turns out to be practically useful.

The fact that the demonstrations were done with small molecules, however, means that the modeling run on the quantum computer could also have been done on classical hardware (it only required 15 hardware qubits). So Google is claiming both quantum advantage and quantum utility, but not at the same time. The sorts of complex, long-distance interactions that would be out of range of classical simulation are still a bit beyond the reach of the current quantum hardware. O’Brien estimated that the hardware’s fidelity would have to improve by a factor of three or four to model molecules that are beyond classical simulation.

The quantum advantage issue should also be seen as a work in progress. Google has collaborated with enough researchers at enough institutions that there’s unlikely to be a major improvement in algorithms that could allow classical computers to catch up. Until the community as a whole has some time to digest the announcement, though, we shouldn’t take that as a given.

The other issue is verifiability. Some quantum algorithms will produce results that can be easily verified on classical hardware—situations where it’s hard to calculate the right result but easy to confirm a correct answer. Quantum echoes isn’t one of those, so we’ll need another quantum computer to verify the behavior Google has described.

But Google told Ars nothing is up to the task yet. “No other quantum processor currently matches both the error rates and number of qubits of our system, so our quantum computer is the only one capable of doing this at present,” the company said. (For context, Google says that the algorithm was run on up to 65 qubits, but the chip has 105 qubits total.)

There’s a good chance that other companies would disagree with that contention, but it hasn’t been possible to ask them ahead of the paper’s release.

In any case, even if this claim proves controversial, Google’s Michel Devoret, a recent Nobel winner, hinted that we shouldn’t have long to wait for additional ones. “We have other algorithms in the pipeline, so we will hopefully see other interesting quantum algorithms,” Devoret said.

Nature, 2025. DOI: 10.1038/s41586-025-09526-6  (About DOIs).




How Easter Island’s giant statues “walked” to their final platforms


Workers with ropes could make the moai “walk” in zig-zag motion along roads tailor-made for the purpose.

Easter Island is famous for its giant monumental statues, called moai, built some 800 years ago and typically mounted on platforms called ahu. Scholars have puzzled over the moai on Easter Island for decades, pondering their cultural significance, as well as how a Stone Age culture managed to carve and transport statues weighing as much as 92 tons. One hypothesis, championed by archaeologist Carl Lipo of Binghamton University, among others, is that the statues were transported in a vertical position, with workers using ropes to essentially “walk” the moai onto their platforms.

The oral traditions of the people of Rapa Nui certainly include references to the moai “walking” from the quarry to their platforms, such as a song that tells of an early ancestor who made the statues walk. While there have been rudimentary field tests showing it might have been possible, the hypothesis has also generated a fair amount of criticism. So Lipo has co-authored a new paper published in the Journal of Archaeological Science offering fresh experimental evidence of “walking” moai, based on 3D modeling of the physics and new field tests to recreate that motion.

The first Europeans arrived in the 17th century and found only a few thousand inhabitants on the tiny island (just 14 by 7 miles across) thousands of miles away from any other land. In order to explain the presence of so many moai, the assumption has been that the island was once home to tens of thousands of people. But Lipo thought perhaps the feat could be accomplished with fewer workers. In 2012, Lipo and his colleague, Terry Hunt of the University of Arizona, showed that you could transport a 10-foot, 5-ton moai a few hundred yards with just 18 people and three strong ropes by employing a rocking motion.

In 2018, Lipo followed up with an intriguing hypothesis for how the islanders placed red hats on top of some moai; those can weigh up to 13 tons. He suggested the inhabitants used ropes to roll the hats up a ramp. Lipo and his team later concluded (based on quantitative spatial modeling) that the islanders likely chose the statues’ locations based on the availability of fresh water sources, per a 2019 paper in PLOS One.

The 2012 experiment demonstrated proof of principle, so why is Lipo revisiting it now? “I always felt that the [original] experiment was disconnected to some degree of theory—that we didn’t have particular expectations about numbers of people, rate of transport, road slope that could be walked, and so on,” Lipo told Ars. There were also time constraints because the attempt was being filmed for a NOVA documentary.

“That experiment was basically a test to see if we could make it happen or not,” he explained. “Fortunately, we did, and our joy in doing so is pretty well represented by our hoots and hollers when it started to walk with such limited efforts. Some of the limitation of the work was driven by the nature of TV. [The film crew] just wanted us—in just a day and a half—to give it a shot. It was 4:30 on the last day when it finally worked, so we really didn’t get a lot of time to explore variability. We also didn’t have any particular predictions to test.”


Example of a road moai that fell and was abandoned after an attempt to re-erect it by excavating under its base, leaving it partially buried at an angle. Credit: Carl Lipo

This time around, “We wanted to explore a bit of the physics: to show that what we did was pretty easily predicted by the physical properties of the moai—its shape, size, height, number of people on ropes, etc.—and that our success in terms of team size and rate of walking was consistent with predictions,” said Lipo. “This enables us to address one of the central critiques that always comes up: ‘Well, you did this with a 5-ton version that was 10 feet tall, but it would never work with a 30-ft-tall version that weighs 30 tons or more.'”

All about that base

You can have ahu (platforms) without moai (statues) and moai without ahu, usually along the roads leading to ahu; they were likely being transported and never got to their destination. Lipo and Hunt have amassed a database of 962 moai across the island, compiled through field surveys and photogrammetric documentation. They were particularly interested in 62 statues located along ancient transport roads that seemed to have been abandoned where they fell.

Their analysis revealed that these road moai had significantly wider bases relative to shoulder width, compared to statues mounted on platforms. This creates a stable foundation that lowers the center of mass so that the statue is more conducive to the side-to-side motion of walking transport without toppling over. Platform statues, by contrast, have shoulders wider than the base for a more top-heavy configuration.

The road moai also have a consistent and pronounced forward lean of between 6 and 15 degrees from vertical, which moves the center of mass close to or just beyond the front edge of the base. Lipo and Hunt think this was careful engineering, not coincidence. The lean isn’t conducive to stable vertical display, but it’s a boon during walking transport: it causes the statue to fall forward when tilted laterally, with the rounded front edge of the base serving as a crucial pivot point, so every lateral rocking motion results in a forward “step.”
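The underlying statics is the familiar tipping condition; the specific numbers below are purely illustrative, not measurements from the paper. An upright statue whose center of mass sits a height $h$ above the ground and a horizontal distance $d$ behind the front edge of its base begins to topple forward once it leans past

$$\theta_{\text{tip}} \approx \arctan\!\left(\frac{d}{h}\right).$$

For, say, $h = 2$ m and $d = 0.35$ m (hypothetical values), that works out to roughly 10 degrees, squarely in the 6-to-15-degree range of carved-in lean, so only a modest lateral rock is needed to send the center of mass past the pivot and produce a step.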

Per the authors, there is strong archaeological evidence that carvers modified the statues once they arrived at their platform destinations, removing material from the front of the base to eliminate the lean. This shifted the center of mass back over the base for a stable upright position. The road moai even lack the carved eye sockets designed to hold white coral eyes with obsidian or red scoria pupils—a final step taken only once the statues had been mounted on their platforms.

Based on 3D modeling, Lipo and his team created a precisely scaled replica of one of the road moai, weighing 4.35 metric tons with the same proportions and mass distribution of the original statue. “Of course, we’d love to build a 30-foot-tall version, but the physical impossibility of doing so makes it a challenging task, nor is it entirely necessary,” said Lipo. “Through physics, we can now predict how many people it would take and how it would be done. That is key.”


Lipo’s team created 3D models of moai to determine the unique characteristics that made them able to be “walked” across Rapa Nui. Credit: Carl Lipo

The new field trials required 18 people, four on each lateral rope and 10 on a rear rope, to achieve the side-to-side walking motion, and they were efficient enough in coordinating their efforts to move the statue forward 100 meters in just 40 minutes. That’s because the method operates on basic pendulum dynamics, per the authors, which minimizes friction between the base and the ground. It’s also a technique that exploits the gradual build-up of amplitude, which “suggests a sophisticated understanding of resonance principles,” Lipo and Hunt wrote.

So the actual statues could have been moved several kilometers over the course of weeks with only modest-sized crews of between 20 and 50 people, i.e., roughly the size of an extended family or “small lineage group” on Easter Island. Once a crew gets the statue rocking side to side—which can require between 15 and 60 people, depending on the size and weight of the moai—the resulting oscillation needs only minimal energy input from a smaller team of rope handlers to keep it going. They mostly provide guidance.
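Taking the field-trial pace at face value gives a rough feel for those distances (an illustrative extrapolation on our part, not a figure from the paper):

$$\frac{100\ \text{m}}{40\ \text{min}} = 2.5\ \text{m/min} = 150\ \text{m/h},$$

so even a crew walking a statue for only a couple of hours a day would cover a few hundred meters daily, and a multi-kilometer road in a matter of weeks.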

Lipo was not the first to test the walking hypothesis. Earlier work includes that of Czech experimental archaeologist Pavel Pavel, who conducted similar practical experiments on Easter Island in the 1980s after being inspired by Thor Heyerdahl’s Kon Tiki. (Heyerdahl even participated in the experiments.) Pavel’s team was able to demonstrate a kind of “shuffling” motion, and he concluded that just 16 men and one leader were sufficient to transport the statues.

Per Lipo and Hunt, Pavel’s demonstration didn’t result in broad acceptance of the walking hypothesis because it still required a huge amount of effort to tilt the statue, producing more of a twisting motion than efficient forward movement. This would only have moved a large statue 100 meters a day under ideal conditions, and the base was likely to be damaged by friction with the ground. Lipo and Hunt maintain this is because Pavel (and others who later tried to reproduce his efforts) used the wrong form of moai for those earlier field tests: statues erected on the platforms, already modified for vertical stability and permanent display, rather than the road moai, whose shapes are more conducive to vertical transport.

“Pavel deserves recognition for taking oral traditions seriously and challenging the dominant assumption of horizontal transport, a move that invited ridicule from established scholars,” Lipo and Hunt wrote. “His experiments suggested that vertical transport was feasible and consistent with cultural memory. Our contribution builds on this by showing that ancestral engineers intentionally designed statues for walking. Those statues were later modified to stand erect on ceremonial platforms, a transformation that effectively erased the morphological features essential for movement.”

The evidence of the roadways

Lipo and Hunt also analyzed the roadways, noting that these ancient roadbeds had concave cross sections that would have been problematic for moving the statues horizontally using wooden rollers or frames perpendicular to those roads. But that concave shape would help constrain rocking movement during vertical transport. And the moai roads were remarkably level with slopes of, on average, 2–3 percent. For the occasional steeper slopes, such as walking a moai up a ramp to the top of an ahu, Lipo and Hunt’s field experiments showed that these could be navigated successfully through controlled stepping.

Furthermore, the distribution pattern of the roadways is consistent with the road moai being left due to mechanical failure. “Arguments that the moai were placed ceremonially in preparation for quarrying have become more common,” said Lipo. “The algorithm there is to claim that positions are ritual, without presenting anything that is falsifiable. There is no reason why the places the statues fell due to mechanical reasons couldn’t later become ‘ritual,’ in the same way that everything on the island could be claimed to be ritual—a circular argument. But to argue that they were placed there purposefully for ritual purposes demands framing the explanation in a way that is falsifiable.”


Schematic representation of the moai transport method using coordinated rope pulling to achieve a “walking” motion. Credit: Carl Lipo and Terry Hunt, 2025

“The only line of evidence that is presented in this way is the presence of ‘platforms’ that were found beneath the base of one moai, which is indeed intriguing,” Lipo continued. “However, those platforms can be explained in other ways, given that the moai certainly weren’t moved from the quarry to the ahu in one single event. They were paused along the way, as is clear from the fact that the roads appear to have been constructed in segments with different features. Their construction appears to be part of the overall transport process.”

Lipo’s work has received a fair share of criticism from other scholars over the years, and his and Hunt’s paper includes a substantial section rebutting the most common of those critiques. “Archaeologists tend to reject (in practice) the idea that the discipline can construct cumulative knowledge,” said Lipo. “In the case of moai transport, we’ve strived to assemble as much empirical evidence as possible and have forwarded an explanation that best accounts for what we can observe. Challenges to these ideas, however, do not come from additional studies with new data but rather just new assertions.”

“This leads the public to believe that we (as a discipline) can never really figure anything out and are always going to be a speculative enterprise, spinning yarns and arguing with each other,” Lipo continued. “With the erosion of trust in science, this is fairly catastrophic to archaeology as a whole but also the whole scientific enterprise. Summarizing the results in the way we do here is an attempt to point out that we can build falsifiable accounts and can make contributions to cumulative knowledge that have empirical consequences—even with something as remarkable as the transport of moai.”

Experimental archaeology is a relatively new field that some believe could be the future of archaeology. “I think experimental archaeology has potential when it’s tied to physics and chemistry,” said Lipo. “It’s not just recreating something and then arguing it was done in the same way in the past. Physics and chemistry are our time machines, allowing us to explain why things are the way they are in the present in terms of the events that occurred in the past. The more we can link the theory needed to explain the present, the better we can explain the past.”

Journal of Archaeological Science, 2025. DOI: 10.1016/j.jas.2025.106383  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Floating electrons on a sea of helium

By now, a handful of technologies are leading contenders for producing a useful quantum computer. Companies have used them to build machines with dozens to hundreds of qubits, the error rates are coming down, and they’ve largely shifted from worrying about basic scientific problems to dealing with engineering challenges.

Yet even at this apparently late date in the field’s development, some companies are still developing entirely new qubit technologies, betting that they have identified something that will let them scale in ways that enable a come-from-behind story. Recently, one of those companies published a paper that describes the physics of its qubit system, which involves lone electrons floating on top of liquid helium.

Trapping single electrons

So how do you get an electron to float on top of helium? To find out, Ars spoke with Johannes Pollanen, the chief scientific officer of EeroQ, the company that accomplished the new work. He said that it’s actually old physics, with the first demonstrations of it having been done half a century ago.

“If you bring a charged particle like an electron near the surface, because the helium is dielectric, it’ll create a small image charge underneath in the liquid,” said Pollanen. “A little positive charge, much weaker than the electron charge, but there’ll be a little positive image there. And then the electron will naturally be bound to its own image. It’ll just see that positive charge and kind of want to move toward it, but it can’t get to it, because the helium is completely chemically inert, there are no free spaces for electrons to go.”
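Pollanen’s description corresponds to a textbook image-charge result: an electron a height $z$ above a dielectric with relative permittivity $\varepsilon$ feels the attractive potential

$$V(z) = -\frac{e^{2}}{4\pi\varepsilon_{0}}\,\frac{\varepsilon - 1}{4(\varepsilon + 1)}\,\frac{1}{z}.$$

For liquid helium, $\varepsilon \approx 1.057$, so the induced image charge is only about 3 percent of the electron’s own charge, which is why the binding is so weak, and the $1/z$ form makes the trapped electron behave like a stretched-out, one-dimensional cousin of a hydrogen atom.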

Obviously, to get the helium liquid in the first place requires extremely low temperatures. But it can actually remain liquid up to temperatures of 4 Kelvin, which doesn’t require the extreme refrigeration technologies needed for things like transmons. Those temperatures also provide a natural vacuum, since pretty much anything else will also condense out onto the walls of the container.


The chip itself, along with diagrams of its organization. The trap is set by the gold electrode on the left. Dark channels allow liquid helium and electrons to flow into and out of the trap. And the bluish electrodes at the top and bottom read the presence of the electrons. Credit: EeroQ

Liquid helium is also a superfluid, meaning it flows without viscosity. This allows it to easily flow up tiny channels cut into the surface of silicon chips that the company used for its experiments. A tungsten filament next to the chip was used to load the surface of the helium with electrons at what you might consider the equivalent of a storage basin.



2025 Nobel Prize in Physics awarded for macroscale quantum tunneling


John Clarke, Michel H. Devoret, and John Martinis built an electrical circuit-based oscillator on a microchip.

A device consisting of four transmon qubits, four quantum buses, and four readout resonators fabricated by IBM in 2017. Credit: Jay M. Gambetta, Jerry M. Chow & Matthias Steffen/CC BY 4.0

The 2025 Nobel Prize in Physics has been awarded to John Clarke, Michel H. Devoret, and John M. Martinis “for the discovery of macroscopic quantum tunneling and energy quantization in an electrical circuit.” The Nobel committee said during a media briefing that the laureates’ work provides opportunities to develop “the next generation of quantum technology, including quantum cryptography, quantum computers, and quantum sensors.” The three men will split the $1.1 million (11 million Swedish kronor) prize money. The presentation ceremony will take place in Stockholm on December 10, 2025.

“To put it mildly, it was the surprise of my life,” Clarke told reporters by phone during this morning’s press conference. “Our discovery in some ways is the basis of quantum computing. Exactly at this moment where this fits in is not entirely clear to me. One of the underlying reasons that cellphones work is because of all this work.”

When physicists began delving into the strange new realm of subatomic particles in the early 20th century, they discovered a realm where the old, deterministic laws of classical physics no longer apply. Instead, uncertainty reigns supreme. It is a world governed not by absolutes, but by probabilities, where events that would seem impossible on the macroscale occur on a regular basis.

For instance, subatomic particles can “tunnel” through seemingly impenetrable energy barriers. Imagine that an electron is a water wave trying to surmount a tall barrier. Unlike water, even if the electron’s wave is lower than the barrier, there is still a small probability that it will seep through to the other side.
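The standard estimate of how improbable this is comes from the WKB approximation: for a particle of mass $m$ and energy $E$ facing a barrier of height $V_{0} > E$ and width $L$, the transmission probability is roughly

$$T \approx \exp\!\left(-\frac{2L}{\hbar}\sqrt{2m\,(V_{0} - E)}\right),$$

never zero, but falling off exponentially with the barrier’s width and height, which is why tunneling is routine for electrons and utterly negligible for everyday objects.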

This neat little trick has been experimentally verified many times. In the 1950s, physicists devised a system in which the flow of electrons would hit an energy barrier and stop because they lacked sufficient energy to surmount that obstacle. But some electrons didn’t follow the established rules of behavior. They simply tunneled right through the energy barrier.


(l-r): John Clarke, Michel H. Devoret, and John M. Martinis. Credit: Niklas Elmehed/Nobel Prize Outreach

From subatomic to the macroscale

Clarke, Devoret, and Martinis were the first to demonstrate that quantum effects, such as quantum tunneling and energy quantization, can operate on macroscopic scales, not just one particle at a time.

After earning his PhD from the University of Cambridge, Clarke came to the University of California, Berkeley, as a postdoc, eventually joining the faculty in 1969. By the mid-1980s, Devoret and Martinis had joined Clarke’s lab as a postdoc and graduate student, respectively. The trio decided to look for evidence of macroscopic quantum tunneling using a specialized circuit called a Josephson junction—a macroscopic device that takes advantage of a tunneling effect that is now widely used in quantum computing, quantum sensing, and cryptography.

A Josephson junction—named after British physicist Brian Josephson, who won the 1973 Nobel Prize in physics—is basically two superconductors separated by a thin insulating barrier. Despite this small gap between the two superconductors, electrons can still tunnel through the insulator and create a current. That occurs at sufficiently low temperatures, when the junction becomes superconducting as electrons form so-called “Cooper pairs.”

The team built an electrical circuit-based oscillator on a microchip measuring about 1 centimeter in size—essentially a quantum version of the classic pendulum. Their biggest challenge was figuring out how to reduce the noise in their experimental apparatus. For their experiments, they first fed a weak current into the junction and measured the voltage—initially zero. Then they increased the current and measured how long it took for the system to tunnel out of that trapped, zero-voltage state and produce a voltage.

Credit: Johan Jarnestad/The Royal Swedish Academy of Sciences

They took many measurements and found that the average current increases as the device’s temperature falls, as expected for escape driven by thermal activation. But at some point, the temperature got so low that this average stopped changing with further cooling—a telltale signature of macroscopic quantum tunneling.
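As a rough illustration of that signature, here is a sketch comparing the two textbook escape rates usually discussed for such a junction: thermally activated escape, which freezes out as the temperature falls, and macroscopic quantum tunneling, which does not. The plasma frequency and barrier height below are invented for illustration and damping corrections are ignored; they are not the values from the 1985 experiment.

```python
import math

HBAR = 1.054_571_817e-34  # J*s
K_B = 1.380_649e-23       # J/K

# Invented, order-of-magnitude parameters for a current-biased Josephson junction
F_PLASMA = 5e9                      # plasma frequency, Hz (assumption)
OMEGA_P = 2 * math.pi * F_PLASMA    # rad/s
BARRIER = 2 * HBAR * OMEGA_P        # barrier height, J (assumption)

def thermal_escape_rate(temp_k: float) -> float:
    """Arrhenius-style rate for hopping over the barrier via thermal activation."""
    return F_PLASMA * math.exp(-BARRIER / (K_B * temp_k))

def quantum_tunneling_rate() -> float:
    """Temperature-independent WKB-style rate for tunneling through the barrier."""
    return F_PLASMA * math.exp(-7.2 * BARRIER / (HBAR * OMEGA_P))

crossover = HBAR * OMEGA_P / (2 * math.pi * K_B)  # tens of millikelvin for these numbers
print(f"crossover temperature ~{crossover*1e3:.0f} mK")
for t in (1.0, 0.1, 0.02):
    print(f"T = {t*1e3:5.0f} mK: thermal {thermal_escape_rate(t):.2e} /s, "
          f"quantum {quantum_tunneling_rate():.2e} /s")
# Below the crossover, the thermal rate collapses while the tunneling rate
# stays put; escape no longer cares about temperature.
```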

The team also demonstrated that the Josephson junction exhibited quantized energy levels—meaning the energy of the system was limited to only certain allowed values, just like subatomic particles can gain or lose energy only in fixed, discrete amounts—confirming the quantum nature of the system. Their discovery effectively revolutionized quantum science, since other scientists could now test precise quantum physics on silicon chips, among other applications.

Lasers, superconductors, and superfluid liquids exhibit quantum mechanical effects at the macroscale, but these arise by combining the behavior of microscopic components. Clarke, Devoret, and Martinis were able to create a macroscopic effect—a measurable voltage—from a single macroscopic quantum state. Their system contained billions of Cooper pairs filling the entire superconductor on the chip, yet all of them were described by a single wave function. The system behaved like a large-scale artificial atom.

In fact, their circuit was basically a rudimentary qubit. Martinis showed in a subsequent experiment that such a circuit could be an information-bearing unit, with the lowest energy state and the first step upward functioning as a 0 and a 1, respectively. This paved the way for such advances as the transmon in 2007: a superconducting charge qubit with reduced sensitivity to noise.

“That quantization of the energy levels is the source of all qubits,” said Irfan Siddiqi, chair of UC Berkeley’s Department of Physics and one of Devoret’s former postdocs. “This was the grandfather of qubits. Modern qubit circuits have more knobs and wires and things, but that’s just how to tune the levels, how to couple or entangle them. The basic idea that Josephson circuits could be quantized and were quantum was really shown in this experiment. The fact that you can see the quantum world in an electrical circuit in this very direct way was really the source of the prize.”

So perhaps it is not surprising that Martinis left academia in 2014 to join Google’s quantum computing efforts, helping to build a quantum computer the company claimed had achieved “quantum supremacy” in 2019. Martinis left in 2020 and co-founded a quantum computing startup, Qolab, in 2022. His fellow Nobel laureate, Devoret, now leads Google’s quantum computing division and is also a faculty member at the University of California, Santa Barbara. As for Clarke, he is now a professor emeritus at UC Berkeley.

“These systems bridge the gap between microscopic quantum behavior and macroscopic devices that form the basis for quantum engineering,” Gregory Quiroz, an expert in quantum information science and quantum algorithms at Johns Hopkins University, said in a statement. “The rapid progress in this field over the past few decades—in part fueled by their critical results—has allowed superconducting qubits to go from small-scale laboratory experiments to large-scale, multi-qubit devices capable of realizing quantum computation. While we are still on the hunt for undeniable quantum advantage, we would not be where we are today without many of their key contributions to the field.”

As is often the case with fundamental research, none of the three physicists realized at the time how significant their discovery would be in terms of its impact on quantum computing and other applications.

“This prize really demonstrates what the American system of science has done best,” Jonathan Bagger, CEO of the American Physical Society, told The New York Times. “It really showed the importance of the investment in research for which we do not yet have an application, because we know that sooner or later, there will be an application.”


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

2025 Nobel Prize in Physics awarded for macroscale quantum tunneling Read More »

meet-the-2025-ig-nobel-prize-winners

Meet the 2025 Ig Nobel Prize winners


The annual award ceremony features miniature operas, scientific demos, and the 24/7 lectures.

The Ig Nobel Prizes honor “achievements that first make people laugh and then make them think.” Credit: Aurich Lawson / Getty Images

Does alcohol enhance one’s foreign language fluency? Do West African lizards have a preferred pizza topping? And can painting cows with zebra stripes help repel biting flies? These and other unusual research questions were honored tonight in a virtual ceremony to announce the 2025 recipients of the annual Ig Nobel Prizes. Yes, it’s that time of year again, when the serious and the silly converge—for science.

Established in 1991, the Ig Nobels are a good-natured parody of the Nobel Prizes; they honor “achievements that first make people laugh and then make them think.” The unapologetically campy awards ceremony features miniature operas, scientific demos, and the 24/7 lectures, in which experts must explain their work twice: once in 24 seconds and then again in just seven words.

Acceptance speeches are limited to 60 seconds. And as the motto implies, the research being honored might seem ridiculous at first glance, but that doesn’t mean it’s devoid of scientific merit. In the weeks following the ceremony, the winners will also give free public talks, which will be posted on the Improbable Research website.

Without further ado, here are the winners of the 2025 Ig Nobel prizes.

Biology

Example of the area of legs and body used to count biting flies on cows.

Credit: Tomoki Kojima et al., 2019

Citation: Tomoki Kojima, Kazato Oishi, Yasushi Matsubara, Yuki Uchiyama, Yoshihiko Fukushima, Naoto Aoki, Say Sato, Tatsuaki Masuda, Junichi Ueda, Hiroyuki Hirooka, and Katsutoshi Kino, for their experiments to learn whether cows painted with zebra-like striping can avoid being bitten by flies.

Any dairy farmer can tell you that biting flies are a pestilent scourge for cattle herds, which is why one so often sees cows throwing their heads, stamping their feet, flicking their tails, and twitching their skin—desperately trying to shake off the nasty creatures. There’s an economic cost as well, since the constant biting causes the cattle to graze and feed less, bed down for shorter times, and start bunching together, which increases heat stress and risks injury to the animals. That results in lower milk yields for dairy cows and lower beef yields from feedlot cattle.

You know who isn’t much bothered by biting flies? The zebra. Scientists have long debated the function of the zebra’s distinctive black-and-white striped pattern. Is it for camouflage? Confusing potential predators? Or is it to repel those pesky flies? Tomoki Kojima et al. decided to put the latter hypothesis to the test, painting zebra stripes on six pregnant Japanese black cows at the Aichi Agricultural Research Center in Japan. They used water-borne lacquers that washed away after a few days, so the cows could take turns being in three different groups: zebra stripes, just black stripes, or no stripes (as a control).

The results: the zebra stripes significantly decreased both the number of biting flies on the cattle and the animals’ fly-repelling behaviors compared to those with black stripes or no stripes. The one exception was skin twitching—perhaps because it is the least energy-intensive of those behaviors. Why does it work? The authors suggest it might have something to do with modulated brightness or polarized light that confuses the insects’ motion detection system, which they use to control their approach when landing on a surface. But that’s a topic for further study.

Chemistry

Freshly cooked frozen blintzes in a non-stick frying pan coated with Teflon

Credit: Andrevan/CC BY-SA 2.5

Citation: Rotem Naftalovich, Daniel Naftalovich, and Frank Greenway, for experiments to test whether eating Teflon [a form of plastic more formally called “polytetrafluoroethylene”] is a good way to increase food volume and hence satiety without increasing calorie content.

Diet sodas and other zero-calorie drinks are a mainstay of the modern diet, thanks to the development of artificial sweeteners whose molecules can’t be metabolized by the human body. The authors of this paper are intrigued by the notion of zero-calorie foods, which they believe could be achieved by increasing the satisfying volume and mass of food without increasing the calories. And they have just the additive for that purpose: polytetrafluoroethylene (PTFE), more commonly known as Teflon.

Yes, the stuff they use on nonstick cookware. They insist that Teflon is inert, heat-resistant, impervious to stomach acid, tasteless, cost-effective, and available in handy powder form for easy mixing into food. They recommend a ratio of three parts food to one part Teflon powder.
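To make the arithmetic concrete, here is a trivial Python sketch of what that 3:1 dilution does to calorie density; the 4 kcal per gram figure is a placeholder for a generic food, not a number from the paper.

```python
def diluted_calorie_density(food_kcal_per_g: float, food_parts: float = 3,
                            filler_parts: float = 1) -> float:
    """Calories per gram of a food/filler mix, assuming the filler (e.g., PTFE
    powder) adds mass and volume but no metabolizable calories."""
    total = food_parts + filler_parts
    return food_kcal_per_g * food_parts / total

# Illustrative: a 4 kcal/g food cut 3:1 with inert powder drops to 3 kcal/g,
# a 25 percent reduction per gram eaten at the same portion size.
print(diluted_calorie_density(4.0))  # 3.0
```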

The authors understand that to the average layperson, this is going to sound like a phenomenally bad idea—no thank you, I would prefer not to have powdered Teflon added to my food. So they spend many paragraphs citing all the scientific studies on the safety of Teflon—it didn’t hurt rats in feeding trials!—as well as the many applications for which it is already being used. These include Teflon-coated stirring rods used in labs and coatings on medical devices like bladder catheters and gynecological implants, as well as the catheters used for in vitro fertilization. And guys, you’ll be happy to know that Teflon doesn’t seem to affect sperm motility or viability. I suspect this will still be a hard sell in the consumer marketplace.

Physics

Cacio e pepe is an iconic pasta dish that is also frustratingly difficult to make

Credit: Simone Frau

Citation: Giacomo Bartolucci, Daniel Maria Busiello, Matteo Ciarchi, Alberto Corticelli, Ivan Di Terlizzi, Fabrizio Olmeda, Davide Revignas, and Vincenzo Maria Schimmenti, for discoveries about the physics of pasta sauce, especially the phase transition that can lead to clumping, which can be a cause of unpleasantness.

“Pasta alla cacio e pepe” is a simple dish: just tonnarelli pasta, pecorino cheese, and pepper. But its simplicity is deceptive. The dish is notoriously challenging to make because it’s so easy for the sauce to form unappetizing clumps with a texture more akin to stringy mozzarella rather than being smooth and creamy. As we reported in April, Italian physicists came to the rescue with a foolproof recipe based on their many scientific experiments, according to a new paper published in the journal Physics of Fluids. The trick: using corn starch for the cheese and pepper sauce instead of relying on however much starch leaches into the boiling water as the pasta is cooked.

Traditionally, the chef will extract part of the water and starch solution—which is cooled to a suitable temperature to avoid clumping as the cheese proteins “denaturate”—and mix it with the cheese to make the sauce, adding the pepper last, right before serving. But the authors note that temperature is not the only factor that can lead to this dreaded “mozzarella phase.” If one tries to mix cheese and water without any starch, the clumping is more pronounced. There is less clumping with water containing a little starch, like water in which pasta has been cooked. And when one mixes the cheese with pasta water “risottata”—i.e., collected and heated in a pan so enough water evaporates that there is a higher concentration of starch—there is almost no clumping.

The authors found that the correct starch ratio is between 2 and 3 percent of the cheese weight. Below that, you get the clumping phase separation; above it, the sauce becomes stiff and unappetizing as it cools. Pasta water alone contains too little starch. Using pasta water “risottata” may concentrate the starch, but the chef has less control over the precise amount of starch. So the authors recommend simply dissolving 4 grams of powdered potato or corn starch in 40 grams of water, heating it gently until it thickens, and combining that gel with the cheese. They also recommend toasting the black pepper briefly before adding it to the mixture to enhance its flavors and aromas.
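If you want to scale the recipe, here is a small Python helper that applies the 2-3 percent rule and the roughly 1:10 starch-to-water gel described above. The 160 grams of pecorino in the example is our own assumption, chosen so the output matches the 4 g of starch in 40 g of water that the authors suggest.

```python
def cacio_e_pepe_starch(cheese_g: float, starch_fraction: float = 0.025,
                        gel_water_ratio: float = 10.0) -> tuple[float, float]:
    """Grams of powdered starch and of water for a stable sauce, using the
    2-3 percent starch-to-cheese window (2.5 percent by default) and a
    roughly 1:10 starch-to-water gel."""
    if not 0.02 <= starch_fraction <= 0.03:
        raise ValueError("starch should be 2-3 percent of the cheese weight")
    starch = cheese_g * starch_fraction
    water = starch * gel_water_ratio
    return starch, water

# For an assumed 160 g of pecorino, this suggests about 4 g of starch in 40 g
# of water, matching the quantities given in the article.
print(cacio_e_pepe_starch(160))  # (4.0, 40.0)
```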

Engineering Design

Experimental set-up (a) cardboard enclosure (b) UV-C tube light (c) SMPS

Credit: Vikash Kumar and Sarthak Mittal

Citation: Vikash Kumar and Sarthak Mittal, for analyzing, from an engineering design perspective, “how foul-smelling shoes affects the good experience of using a shoe-rack.”

Shoe odor is a universal problem, even in India, according to the authors of this paper, who hail from Shiv Nadar University (SNU) in Uttar Pradesh. All that heat and humidity means people perspire profusely when engaging even in moderate physical activity. Add in a lack of proper ventilation and washing, and shoes become a breeding ground for odor-causing bacteria called Kytococcus sedentarius. Most Indians make use of shoe racks to store their footwear, and the odors can become quite intense in that closed environment.

Yet nobody has really studied the “smelly shoe” problem when it comes to shoe racks. Enter Kumar and Mittal, who conducted a pilot study with the help of 149 first-year SNU students. More than half reported feeling uncomfortable about their own or someone else’s smelly shoes, and 90 percent kept their shoes in a shoe rack. Common methods to combat the odor included washing the shoes and drying them in the sun; using spray deodorant; or sprinkling the shoes with an antibacterial powder. They were unaware of many current odor-combatting products on the market, such as tea tree and coconut oil solutions, thyme oil, or isopropyl alcohol.

Clearly, there is an opportunity to make a killing in the odor-resistant shoe rack market. So naturally Kumar and Mittal decided to design their own version. They opted to use bacteria-killing UV rays (via a UV-C tube light) as their built-in “odor eater,” testing their device on the shoes of several SNU athletes, “which had a very strong noticeable odor.” They concluded that an exposure time of two to three minutes was sufficient to kill the bacteria and get rid of the odor.

Aviation

Wing membranes (patagia) of Townsend's big-eared bat, Corynorhinus townsendii

Credit: Public domain

Citation: Francisco Sánchez, Mariana Melcón, Carmi Korine, and Berry Pinshow, for studying whether ingesting alcohol can impair bats’ ability to fly and also their ability to echolocate.

Nature is rife with naturally occurring ethanol, particularly from ripening fruit, and that fruit in turn is consumed by various microorganisms and animal species. There are occasional rare instances of some mammals, birds, and even insects consuming fruit rich in ethanol and becoming intoxicated, making those creatures more vulnerable to potential predators or more accident-prone due to lessened motor coordination. Sánchez et al. decided to look specifically at the effects of ethanol on Egyptian fruit bats, which have been shown to avoid high-ethanol fruit. The authors wondered if this might be because the bats wanted to avoid becoming inebriated.

They conducted their experiments on adult male fruit bats kept in an outdoor cage that served as a long flight corridor. The bats were given liquid food with varying amounts of ethanol and then released in the corridor, with the authors timing how long it took each bat to fly from one end to the other. A second experiment followed the same basic protocol, but this time the authors recorded the bats’ echolocation calls with an ultrasonic microphone. The results: The bats that received liquid food with the highest ethanol content took longer to fly the length of the corridor, evidence of impaired flight ability. The quality of those bats’ echolocation was also adversely affected, putting them at a higher risk of colliding with obstacles mid-flight.

Psychology

Narcissus (1597–99) by Caravaggio; the man in love with his own reflection

Credit: Public domain

Citation: Marcin Zajenkowski and Gilles Gignac, for investigating what happens when you tell narcissists—or anyone else—that they are intelligent.

Not all narcissists are created equal. There are vulnerable narcissists who tend to be socially withdrawn, have low self-esteem, and are prone to negative emotions. And then there are grandiose narcissists, who exhibit social boldness, high self-esteem, and are more likely to overestimate their own intelligence. The prevailing view is that this overconfidence stems from narcissism. The authors wanted to explore whether this effect might also work in reverse, i.e., that believing one has superior intelligence due to positive external feedback can lead to at least a temporary state of narcissism.

Zajenkowski et al. recruited 361 participants from Poland who were asked to rate their level of intelligence compared to other people; complete the Polish version of the Narcissistic Personality Inventory; and take an IQ test to compare their perceptions of their own intelligence with an objective measurement. The participants were then randomly assigned to one of two groups. One group received positive feedback—telling them they did indeed have a higher IQ than most people—while the other received negative feedback.

The results confirmed most of the researchers’ hypotheses. In general, participants gave lower estimates of their relative intelligence after completing the IQ test, which provided an objective check of sorts. But the type of feedback they received had a measurable impact. Positive feedback enhanced their feelings of uniqueness (a key aspect of grandiose narcissism). Those who received negative feedback rated their own intelligence as being lower, and that negative feedback had a larger effect than positive feedback. The authors concluded that external feedback helped shape the subjects’ perception of their own intelligence, regardless of the accuracy of that feedback.

Nutrition

Rainbow lizards eating ‘four cheese’ pizza at a seaside touristic resort in Togo.

Credit: Daniele Dendi et al, 2022

Citation: Daniele Dendi, Gabriel H. Segniagbeto, Roger Meek, and Luca Luiselli, for studying the extent to which a certain kind of lizard chooses to eat certain kinds of pizza.

Move over, Pizza Rat, here come the Pizza Lizards—rainbow lizards, to be precise. This is a species common to urban and suburban West Africa. The lizards primarily live off insects and arthropods, but their proximity to humans has led to some developing a more omnivorous approach to their foraging. Bread is a particular favorite. Case in point: One fine sunny day at a Togo seaside resort, the authors noticed a rainbow lizard stealing a tourist’s slice of four-cheese pizza and happily chowing down.

Naturally, they wanted to know if this was an isolated incident or whether the local rainbow lizards routinely feasted on pizza slices. And did the lizards have a preferred topping? Inquiring minds need to know. So they monitored the behavior of nine particular lizards, giving them the choice between a plate of four-cheese pizza and a plate of “four seasons” pizza, spaced about 10 meters apart.

It only took 15 minutes for the lizards to find the pizza and eat it, sometimes fighting over the remaining slices. But they only ate the four-cheese pizza. For the authors, this suggests there might be some form of chemical cues that attract them to the cheesy pizzas, or perhaps it’s easier for them to digest. I’d love to see how the lizards react to the widely derided Canadian bacon and pineapple pizza.

Pediatrics

Pumped breast milk in bottles

Citation: Julie Mennella and Gary Beauchamp, for studying what a nursing baby experiences when the baby’s mother eats garlic.

Mennella and Beauchamp designed their experiment to investigate two questions: whether the consumption of garlic altered the odor of a mother’s breast milk, and if so, whether those changes affected the behavior of nursing infants. (Garlic was chosen because it is known to produce off flavors in dairy cow milk and affect human body odor.) They recruited eight women who were exclusively breastfeeding their infants, taking samples of their breast milk over a period when the participants abstained from eating sulfurous foods (garlic, onion, asparagus), and more samples after the mothers consumed either a garlic capsule or a placebo.

The results: Mothers who ingested the garlic capsules produced milk with a perceptibly more intense odor, as evaluated by several adult panelists brought in to sniff the breast milk samples. The odor peaked two hours after ingestion and decreased thereafter, which is consistent with prior research on cows that ingested highly odorous feeds. As for the infants, those whose mothers ingested garlic attached to the breast for longer periods and sucked more when the milk smelled like garlic. This could be relevant to ongoing efforts to determine whether sensory experiences during breastfeeding can influence how readily infants accept new foods upon weaning, and perhaps even their later food preferences.

Literature

closeup of a hand with clubbed fingernails

Credit: William B. Bean

Citation: The late Dr. William B. Bean, for persistently recording and analyzing the rate of growth of one of his fingernails over a period of 35 years.

If you’re surprised to see a study on fingernail growth rates under the Literature category, it will all make sense once you read the flowery prose stylings of Dr. Bean. He really did keep detailed records of how fast his fingernails grew for 35 years, claiming in his final report that “the nail provides a slowly moving keratin kymograph that measures age on the inexorable abscissa of time.” He sprinkles his observations with ponderous references to medieval astrology, James Boswell, and Moby Dick, with a dash of curmudgeonly asides bemoaning the sterile modern medical teaching methods that permeate “the teeming mass of hope and pain, technical virtuosity, and depersonalization called a ‘health center.'”

So what did our pedantic doctor discover in those 35 years, not just studying his own nails, but meticulously reviewing all the available scientific literature? Well, for starters, the rate of fingernail growth diminishes as one ages; Bean noted that his growth rates remained steady early on, but “slowed down a trifle” over the last five years of his project. Nails grow faster in children than adults. A warm environment can also accelerate growth, as does biting one’s fingernails—perhaps, he suggests, because the biting stimulates blood flow to the area. And he debunks the folklore of hair and nails growing even after death: it’s just the retraction and contraction of the skin post-mortem that makes it seem like the nails are growing.

Peace

Citation: Fritz Renner, Inge Kersbergen, Matt Field, and Jessica Werthmann, for showing that drinking alcohol sometimes improves a person’s ability to speak in a foreign language.

Alcohol is well-known to have detrimental effects on what’s known in psychological circles as “executive functioning,” impacting things like working memory and inhibitory control. But there’s a widespread belief among bilingual people that a little bit of alcohol actually improves one’s fluency in a foreign language, which also relies on executive functioning. So wouldn’t being intoxicated actually have an adverse effect on foreign language fluency? Renner et al. decided to investigate further.

They recruited 50 native German-speaking undergrad psychology students at Maastricht University in the Netherlands who were also fluent in Dutch. They were randomly divided into two groups. One group received an alcoholic drink (vodka with bitter lemon), and the other received water. Each participant consumed enough to be slightly intoxicated after 15 minutes, and then engaged in a discussion in Dutch with a native Dutch speaker. Afterward, they were asked to rate their self-perception of their skill at Dutch, with the Dutch speakers offering independent observer ratings.

The researchers were surprised to find that intoxication improved the participants’ Dutch fluency, based on the independent observer reports. (Self-evaluations were largely unaffected by intoxication levels.) One can’t simply attribute this to so-called “Dutch courage,” i.e., increased confidence associated with intoxication. Rather, the authors suggest that intoxication lowers language anxiety, thereby increasing one’s foreign language proficiency, although further research would be needed to support that hypothesis.



Meet the 2025 Ig Nobel Prize winners Read More »

scientists-unlock-secret-to-thick,-stable-beer-foams

Scientists unlock secret to thick, stable beer foams

For many beer lovers, a nice thick head of foam is one of life’s pure pleasures, and the longer that foam lasts, the better the beer-drinking experience. A team of Swiss researchers spent seven years studying why some beer foams last longer than others and found that the degree of fermentation—i.e., whether a given beer has been singly, doubly, or triply fermented—is crucial, according to a new paper published in the journal Physics of Fluids.

As previously reported, foams are ubiquitous in everyday life, found in foods (whipped cream), beverages (beer, cappuccino), shaving cream and hair-styling mousse, packing peanuts, building insulation, flame-retardant materials, and so forth. All foams are the result of air being beaten into a liquid formula that contains some kind of surfactant (surface-active agent), usually fats or proteins in edible foams, or chemical additives in non-edible products. That surfactant strengthens the liquid film walls of the bubbles to keep them from collapsing.

Individual bubbles typically form a sphere because that’s the shape with the minimum surface area for any volume and hence is the most energy-efficient. One reason for the minimizing principle when it comes to a bubble’s shape is that many bubbles can then tightly pack together to form a foam. But bubbles “coarsen” over time, the result of gravity pulling down on the liquid and thinning out the walls. Eventually, they start to look more like soccer balls (polyhedrons). In a coarsening foam, smaller bubbles are gradually absorbed by larger ones. There is less and less liquid to separate the individual bubbles, so they press together to fill the space.
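The driving force behind that coarsening is the Young-Laplace overpressure inside each bubble, which grows as bubbles shrink. Here is a short Python illustration; the 0.042 N/m surface tension and the bubble radii are rough, illustrative figures rather than values from the study.

```python
def laplace_overpressure(radius_m: float, surface_tension: float = 0.042) -> float:
    """Excess pressure (Pa) inside a gas bubble of the given radius sitting in
    liquid, from the Young-Laplace relation dP = 2*gamma/r."""
    return 2 * surface_tension / radius_m

# Illustrative numbers (a surface tension of ~0.042 N/m is a rough figure for beer):
small, large = 50e-6, 500e-6  # 50 micron vs. 500 micron bubble radii
print(laplace_overpressure(small))  # ~1680 Pa
print(laplace_overpressure(large))  # ~168 Pa
# Gas diffuses from the higher-pressure small bubble into the larger one,
# which is why small bubbles shrink and the foam coarsens over time.
```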

This “jamming” is why foams are typically far more rigid than their gas (95 percent) and liquid (5 percent) components. The more tightly the bubbles jam together, the less they can move around and the greater the pressure inside them becomes, giving them properties of a solid.

Various factors can affect foam stability. For instance, in 2019, Japanese researchers investigated a phenomenon known as “collective bubble collapse,” or CBC, in which breaking one bubble at the edge of a foam results in a cascading effect as the breakage spreads to other bubbles in the foam. They identified two distinct mechanisms for the resulting CBCs: a so-called “propagating mode,” in which a broken bubble is absorbed into the liquid film, and a “penetrating mode,” in which the breakage of a bubble causes droplets to shoot off and hit other bubbles, causing them to break in turn.

Scientists unlock secret to thick, stable beer foams Read More »

physics-of-badminton’s-new-killer-spin-serve

Physics of badminton’s new killer spin serve

Serious badminton players are constantly exploring different techniques to give them an edge over opponents. One of the latest innovations is the spin serve, a devastatingly effective method in which a player adds a pre-spin just before the racket contacts the shuttlecock (aka the birdie). It’s so effective—some have called it “impossible to return“—that the Badminton World Federation (BWF) banned the spin serve in 2023, at least until after the 2024 Paralympic Games in Paris.

The sanction wasn’t meant to quash innovation but to address players’ concerns about the possible unfair advantages the spin serve conferred. The BWF thought that international tournaments shouldn’t become the test bed for the technique, which is markedly similar to the previously banned “Sidek serve.” The BWF permanently banned the spin serve earlier this year. Chinese physicists have now teased out the complex fundamental physics of the spin serve, publishing their findings in the journal Physics of Fluids.

Shuttlecocks are unique among the various projectiles used in different sports due to their open conical shape. Sixteen overlapping feathers protrude from a rounded cork base that is usually covered in thin leather. The birdies one uses for leisurely backyard play might be synthetic nylon, but serious players prefer actual feathers.

Those overlapping feathers give rise to quite a bit of drag, so the shuttlecock rapidly decelerates as it travels and falls at a much steeper angle than it rose, rather than tracing a symmetric parabola. The extra drag also means that players must exert quite a bit of force to hit a shuttlecock the full length of a badminton court. Still, shuttlecocks can achieve top speeds of more than 300 mph. The feathers also give the birdie a slight natural spin around its axis, and this can affect different strokes. For instance, slicing from right to left, rather than vice versa, will produce a better tumbling net shot.
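To see how lopsided a draggy trajectory gets, here is a small Python sketch that integrates projectile motion with quadratic air drag. The 6.7 m/s terminal speed, 40 m/s launch speed, and 30-degree launch angle are illustrative assumptions, not measurements from the paper.

```python
import math

G = 9.81          # gravitational acceleration, m/s^2
V_TERMINAL = 6.7  # m/s, rough terminal speed assumed for a feather shuttlecock

def simulate(v0: float, angle_deg: float, dt: float = 1e-4):
    """Integrate a shuttlecock-like trajectory with quadratic air drag.
    Drag deceleration is modeled as g*(v/V_TERMINAL)^2 opposite to the velocity,
    which is equivalent to assuming a constant drag coefficient."""
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while y >= 0.0:
        v = math.hypot(vx, vy)
        ax = -G * (v / V_TERMINAL) ** 2 * (vx / v)
        ay = -G - G * (v / V_TERMINAL) ** 2 * (vy / v)
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x, math.degrees(math.atan2(-vy, vx))  # range and descent angle below horizontal

rng, fall_angle = simulate(v0=40.0, angle_deg=30.0)
print(f"range ~{rng:.1f} m, descending at ~{fall_angle:.0f} degrees at landing")
# The descent angle comes out far steeper than the 30-degree launch,
# illustrating the lopsided trajectory described above.
```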


Chronophotographies of shuttlecocks after an impact with a racket. Credit: Caroline Cohen et al., 2015

The cork base makes the birdie aerodynamically stable: No matter how one orients the birdie, once airborne, it will turn so that it is traveling cork-first and will maintain that orientation throughout its trajectory. A 2015 study examined the physics of this trademark flip, recording flips with high-speed video and conducting free-fall experiments in a water tank to study how its geometry affects the behavior. The latter confirmed that shuttlecock feather geometry hits a sweet spot in terms of an opening inclination angle that is neither too small nor too large. And they found that feather shuttlecocks are indeed better than synthetic ones, deforming more when hit to produce a more triangular trajectory.

Physics of badminton’s new killer spin serve Read More »

ice-discs-slingshot-across-a-metal-surface-all-on-their-own

Ice discs slingshot across a metal surface all on their own


VA Tech experiment was inspired by Death Valley’s mysterious “sailing stones” at Racetrack Playa.

Graduate student Jack Tapocik sets up ice on an engineered surface in the VA Tech lab of Jonathan Boreyko. Credit: Alex Parrish/Virginia Tech

Scientists have figured out how to make frozen discs of ice self-propel across a patterned metal surface, according to a new paper published in the journal ACS Applied Materials and Interfaces. It’s the latest breakthrough to come out of the Virginia Tech lab of mechanical engineer Jonathan Boreyko.

A few years ago, Boreyko’s lab experimentally demonstrated a three-phase Leidenfrost effect in water vapor, liquid water, and ice. The Leidenfrost effect is what happens when you dash a few drops of water onto a very hot, sizzling skillet. The drops levitate, sliding around the pan with wild abandon. If the surface is at least 400° Fahrenheit (well above the boiling point of water), cushions of water vapor, or steam, form underneath them, keeping them levitated. The effect also works with other liquids, including oils and alcohol, but the temperature at which it manifests will be different.

Boreyko’s lab discovered that this effect can also be achieved in ice simply by placing a thin, flat disc of ice on a heated aluminum surface. When the plate was heated above 150° C (302° F), the ice did not levitate on a vapor the way liquid water does. Instead, there was a significantly higher threshold of 550° Celsius (1,022° F) for levitation of the ice to occur. Unless that critical threshold is reached, the meltwater below the ice just keeps boiling in direct contact with the surface. Cross that critical point and you will get a three-phase Leidenfrost effect.

The key is a temperature differential in the meltwater just beneath the ice disc. The bottom of the meltwater is boiling, but the top of the meltwater sticks to the ice. It takes a lot to maintain such an extreme difference in temperature, and doing so consumes most of the heat from the aluminum surface, which is why it’s harder to achieve levitation of an ice disc. Ice can suppress the Leidenfrost effect even at very high temperatures (up to 550° C), which means that using ice particles instead of liquid droplets would be better for many applications involving spray quenching: rapid cooling in nuclear power plants, firefighting, or rapid heat quenching when shaping metals, for example.

This time around, Boreyko et al. have turned their attention to what the authors term “a more viscous analog” to a Leidenfrost ratchet, a form of droplet self-propulsion. “What’s different here is we’re no longer trying to levitate or even boil,” Boreyko told Ars. “Now we’re asking a more straightforward question: Is there a way to make ice move across the surface directionally as it is melting? Regular melting at room temperature. We’re not boiling, we’re not levitating, we’re not Leidenfrosting. We just want to know, can we make ice shoot across the surface if we design a surface in the right way?”

Mysterious moving boulders

The researchers were inspired by Death Valley’s famous “sailing stones” on Racetrack Playa. Watermelon-sized boulders are strewn throughout the dry lake bed, and they leave trails in the cracked earth as they slowly migrate a couple of hundred meters each season. Scientists didn’t figure out what was happening until 2014. Although co-author Ralph Lorenz (Johns Hopkins University) admitted he thought theirs would be “the most boring experiment ever” when they first set it up in 2011, two years later, the boulders did indeed begin to move while the playa was covered with a pond of water a few inches deep.

So Lorenz and his co-authors were finally able to identify the mechanism. The ground is too hard to absorb rainfall, and that water freezes when the temperature drops. When temperatures rise above freezing again, the ice starts to melt, creating ice rafts floating on the meltwater. And when the winds are sufficiently strong, they cause the ice rafts to drift along the surface.


A sailing stone at Death Valley’s Racetrack Playa. Credit: Tahoenathan/CC BY-SA 3.0

“Nature had to have wind blowing to kind of push the boulder and the ice along the meltwater that was beneath the ice,” said Boreyko. “We thought, what if we could have a similar idea of melting ice moving directionally but use an engineered structure to make it happen spontaneously so we don’t have to have energy or wind or anything active to make it work?”

The team made their ice discs by pouring distilled water into thermally insulated polycarbonate Petri dishes. This resulted in bottom-up freezing, which minimizes air bubbles in the ice. They then milled asymmetric grooves into uncoated aluminum plates in a herringbone pattern—essentially creating arrowhead-shaped channels—and then bonded them to hot plates heated to the desired temperature. Each ice disc was placed on the plate with rubber tongs, and the experiments were filmed from various angles to fully capture the disc behavior.

The herringbone pattern is the key. “The directionality is what really pushes the water,” Jack Tapocik, a graduate student in Boreyko’s lab, told Ars. “The herringbone doesn’t allow for water to flow backward, the water has to go forward, and that basically pushes the water and the ice together forward. We don’t have a treated surface, so the water just sits on top and the ice all moves as one unit.”

Boreyko draws an analogy to tubing on a river, except it’s the directional channels rather than gravity causing the flow. “You can see [in the video below] how it just follows the meltwater,” he said. “This is your classic entrainment mechanism where if the water flows that way and you’re floating on the water, you’re going to go the same way, too. It’s basically the same idea as what makes a Leidenfrost droplet also move one way: It has a vapor flow underneath. The only difference is that was a liquid drifting on a vapor flow, whereas now we have a solid drifting on a liquid flow. The densities and viscosities are different, but the idea is the same: You have a more dense phase that is drifting on the top of a lighter phase that is flowing directionally.”

Jonathan Boreyko/Virginia Tech

Next, the team repeated the experiment, this time coating the aluminum herringbone surface with water-repellant spray, hoping to speed up the disc propulsion. Instead, they found that the disc ended up sticking to the treated surface for a while before suddenly slingshotting across the metal plate.

“It’s a totally different concept with totally different physics behind it, and it’s so much cooler,” said Tapocik. “As the ice is melting on these coated surfaces, the water just doesn’t want to sit within the channels. It wants to sit on top because of the [hydrophobic] coating we have on there. The ice is directly sticking now to the surface, unlike before when it was floating. You get this elongated puddle in front. The easiest place [for the ice] to be is in the center of this giant, long puddle. So it re-centers, and that’s what moves it forward like a slingshot.”

Essentially, the water keeps expanding asymmetrically, and that difference in shape gives rise to a mismatch in surface tension because the amount of force that surface tension exerts on a body depends on curvature. The flatter puddle shape in front has less curvature than the smaller shape in back. As the video below shows, when the mismatch in surface tension becomes sufficiently strong, “It just rips the ice off the surface and flings it along,” said Boreyko. “In the future, we could try putting little things like magnets on top of the ice. We could probably put a boulder on it if we wanted to. The Death Valley effect would work with or without a boulder because it’s the floating ice raft that moves with the wind.”

Jonathan Boreyko/Virginia Tech

One potential application is energy harvesting. For example, one could pattern the metal surface in a circle rather than a straight line so the melting ice disk would continually rotate. Put magnets on the disk, and they would also rotate and generate power. One might even attach a turbine or gear to the rotating disc.

The effect might also provide a more energy-efficient means of defrosting, a longstanding research interest for Boreyko. “If you had a herringbone surface with a frosting problem, you could melt the frost, even partially, and use these directional flows to slingshot the ice off the surface,” he said. “That’s both faster and uses less energy than having to entirely melt the ice into pure water. We’re looking at potentially over a tenfold reduction in heating requirements if you only have to partially melt the ice.”
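The arithmetic behind that claim is straightforward: melting frost is dominated by latent heat, so melting only a thin layer costs proportionally less energy. Here is a minimal Python sketch, with the 10 percent melted fraction as an assumed example rather than a figure from the study.

```python
LATENT_HEAT_FUSION = 334.0  # J per gram of ice

def defrost_energy_j(frost_mass_g: float, melted_fraction: float) -> float:
    """Latent heat needed to melt only a fraction of a frost layer, ignoring
    the (smaller) sensible-heat terms."""
    return frost_mass_g * melted_fraction * LATENT_HEAT_FUSION

full = defrost_energy_j(100.0, 1.0)      # melt all 100 g of frost
partial = defrost_energy_j(100.0, 0.1)   # melt ~10 percent so the rest slides off
print(full / partial)  # 10.0 -- the rough tenfold saving mentioned above
```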

That said, “Most practical applications don’t start from knowing the application beforehand,” said Boreyko. “It starts from ‘Oh, that’s a really cool phenomenon. What’s going on here?’ It’s only downstream from that it turns out you can use this for better defrosting of heat exchangers for heat pumps. I just think it’s fun to say that we can make a little melting disk of ice very suddenly slingshot across the table. It’s a neat way to grab your attention and think more about melting and ice and how all this stuff works.”

DOI: ACS Applied Materials and Interfaces, 2025. 10.1021/acsami.5c08993  (About DOIs).



Ice discs slingshot across a metal surface all on their own Read More »

misunderstood-“photophoresis”-effect-could-loft-metal-sheets-to-exosphere

Misunderstood “photophoresis” effect could loft metal sheets to exosphere


Photophoresis can generate a tiny bit of lift without any moving parts.

Image of a wooden stand holding a sealed glass bulb with a spinning set of vanes, each of which has a lit and dark side.

Most people would recognize the device in the image above, although they probably wouldn’t know it by its formal name: the Crookes radiometer. As its name implies, placing the radiometer in light produces a measurable change: the blades start spinning.

Unfortunately, many people misunderstand the physics of its operation (which we’ll return to shortly). The actual forces that drive the blades to spin, called photophoresis, can act on a variety of structures as long as they’re placed in a sufficiently low-density atmosphere. Now, a team of researchers has figured out that it may be possible to use the photophoretic effect to loft thin sheets of metal into the upper atmosphere of Earth and other planets. While their idea is to use it to send probes to the portion of the atmosphere that’s too high for balloons and too low for satellites, they have tested some working prototypes a bit closer to the Earth’s surface.

Photophoresis

It’s quite common—and quite wrong—to see explanations of the Crookes radiometer that involve radiation pressure. Supposedly, the dark sides of the blades absorb more photons, each of which carries a tiny bit of momentum, giving the dark side of the blades a consistent push. The problem with this explanation is that photons are bouncing off the silvery side, which imparts even more momentum. If the device were spinning due to radiation pressure, it would turn in the opposite direction from the one it actually does.

An excess of the absorbed photons on the dark side is key to understanding how it works, though. Photophoresis operates through the temperature difference that develops between the warm, light-absorbing dark side of the blade and the cooler silvered side.

Any gas molecule that bumps into the dark side will likely pick up some of the excess thermal energy from it and move away from the blade faster than it arrived. At the sorts of atmospheric pressures we normally experience, these molecules don’t get very far before they bump into other gas molecules, which keeps any significant differences from developing.

But a Crookes radiometer is in a sealed glass container with a far lower air pressure. This allows the gas molecules to speed off much farther from the dark surface of the blade before they run into anything, creating an area of somewhat lower pressure at its surface. That causes gas near the surface of the shiny side to rush around and fill this lower-pressure area, imparting the force that starts the blades turning.
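The relevant length scale here is the mean free path, which kinetic theory gives as lambda = k_B*T / (sqrt(2)*pi*d^2*p). Here is a quick Python check, using a rough molecular diameter for air and an illustrative 1 Pa for the partial vacuum inside the bulb (both are assumptions for the sake of the example).

```python
import math

K_B = 1.380_649e-23  # Boltzmann constant, J/K
D_AIR = 3.7e-10      # rough effective diameter of an air molecule, m (assumption)

def mean_free_path(pressure_pa: float, temperature_k: float = 300.0) -> float:
    """Kinetic-theory mean free path: how far a gas molecule travels, on
    average, before colliding with another molecule."""
    return K_B * temperature_k / (math.sqrt(2) * math.pi * D_AIR**2 * pressure_pa)

print(f"sea level (101 kPa): {mean_free_path(101_325):.2e} m")       # ~7e-8 m, tens of nanometers
print(f"partial vacuum (~1 Pa, illustrative): {mean_free_path(1.0):.2e} m")  # ~7e-3 m, millimeters
```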

It’s pretty impressively inefficient in that sort of configuration, though. So people have spent a lot of time trying to design alternative configurations that can generate a bit more force. One idea with a lot of research traction is a setup that involves two thin metal sheets—one light, one dark—arranged parallel to each other. Both sheets would be heavily perforated to cut down on weight. And a subset of them would have a short pipe connecting holes on the top and bottom sheet. (This has picked up the nickname “nanocardboard.”)

These pipes would serve several purposes. One is to simply link the two sheets into a single unit. Another is to act as an insulator, keeping heat from moving from the dark sheet to the light one, and thus enhancing the temperature gradient. Finally, they provide a direct path for air to move from the top of the light-colored sheet to the bottom of the dark one, giving a bit of directed thrust to help keep the sheets aloft.

Optimization

As you might imagine, there are a lot of free parameters you can tweak: the size of the gap between the sheets, the density of perforations in them, the number of those holes that are connected by a pipe, and so on. So a small team of researchers developed a system to model different configurations and attempt to optimize for lift. (We’ll get to their motivations for doing so a bit later.)

Starting with a disk of nanocardboard, “The inputs to the model are the geometric, optical and thermal properties of the disk, ambient gas conditions, and external radiative heat fluxes on the disk,” as the researchers describe it. “The outputs are the conductive heat fluxes on the two membranes, the membrane temperatures, and the net photophoretic lofting force on the structure.” In general, the ambient gas conditions needed to generate lift are similar to the ones inside the Crookes radiometer: well below the air pressure at sea level.

The model suggested that three trends should influence any final designs. The first is that the density of perforations is a balance: at relatively low elevations (meaning a denser atmosphere), many perforations increase the stress on large sheets, but they decrease the stress for small items at high elevations. The second is that lift does not simply scale up with surface area; lift per unit area tends to drop for larger sheets because they are more likely to equilibrate to the prevailing temperatures. A square millimeter of nanocardboard produces over 10 times more lift per surface area than a 10-square-centimeter piece of the same material.

Finally, the researchers calculate that the lift is at its maximum in the mesosphere, the area just above the stratosphere (50–100 kilometers above Earth’s surface).

Light and lifting

The researchers then built a few sheets of nanocardboard to test the output of their model. The actual products, primarily made of chromium, aluminum, and aluminum oxide, were incredibly light, weighing only a gram for a square meter of material. When illuminated by a laser or white LED, they generated measurable force on a testing device, provided the atmosphere was kept sufficiently sparse. With an exposure equivalent to sunlight, the material generated more lift than its own weight.
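That comparison is easy to sanity-check: at roughly one gram per square meter, hovering takes only about 10 millinewtons of photophoretic force per square meter of sheet. A quick Python sketch of that back-of-the-envelope number:

```python
G = 9.81  # gravitational acceleration, m/s^2

def required_lift_mn_per_m2(areal_density_g_per_m2: float) -> float:
    """Lift (millinewtons per square meter) needed to offset the weight of a
    sheet with the given areal density."""
    return areal_density_g_per_m2 * 1e-3 * G * 1e3

# The article quotes roughly one gram per square meter of nanocardboard, so
# levitation requires only about 10 mN of lift per square meter.
print(f"{required_lift_mn_per_m2(1.0):.1f} mN/m^2")  # ~9.8
```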

It’s a really nice demonstration that we can take a relatively obscure and weak physical effect and design devices that can levitate in the upper atmosphere, powered by nothing more than sunlight—which is pretty cool.

But the researchers have a goal beyond that. The mesosphere turns out to be a really difficult part of the atmosphere to study. It’s not dense enough to support balloons or aircraft, but it still has enough gas to make quick work of any satellites. So the researchers really want to turn one of these devices into an instrument-carrying aircraft. Unfortunately, that would mean adding the structural components needed to hold instruments, along with the instruments themselves. And even in the mesosphere, where lift is optimal, these things do not generate much in the way of lift.

Plus, there’s the issue of getting them there, given that they won’t generate enough lift in the lower atmosphere, so they’ll have to be carried into the upper stratosphere by something else and then be released gently enough to not damage their fragile structure. And then, unless you’re lofting them during the polar summer, they will likely come floating back down at night.

None of this is to say this is an impossible dream. But there are definitely a lot of very large hurdles between the work and practical applications on Earth—much less on Mars, where the authors suggest the system could also be used to explore the mesosphere. But even if that doesn’t end up being realistic, this is still a pretty neat bit of physics.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Misunderstood “photophoresis” effect could loft metal sheets to exosphere Read More »