Physics

Meet the 2025 Ig Nobel Prize winners


The annual award ceremony features miniature operas, scientific demos, and the 24/7 lectures.

The Ig Nobel Prizes honor “achievements that first make people laugh and then make them think.” Credit: Aurich Lawson / Getty Images

Does alcohol enhance one’s foreign language fluency? Do West African lizards have a preferred pizza topping? And can painting cows with zebra stripes help repel biting flies? These and other unusual research questions were honored tonight in a virtual ceremony to announce the 2025 recipients of the annual Ig Nobel Prizes. Yes, it’s that time of year again, when the serious and the silly converge—for science.

Established in 1991, the Ig Nobels are a good-natured parody of the Nobel Prizes; they honor “achievements that first make people laugh and then make them think.” The unapologetically campy awards ceremony features miniature operas, scientific demos, and the 24/7 lectures, whereby experts must explain their work twice: once in 24 seconds and again in just seven words.

Acceptance speeches are limited to 60 seconds. And as the motto implies, the research being honored might seem ridiculous at first glance, but that doesn’t mean it’s devoid of scientific merit. In the weeks following the ceremony, the winners will also give free public talks, which will be posted on the Improbable Research website.

Without further ado, here are the winners of the 2025 Ig Nobel prizes.

Biology

Example of the area of legs and body used to count biting flies on cows.

Credit: Tomoki Kojima et al., 2019

Citation: Tomoki Kojima, Kazato Oishi, Yasushi Matsubara, Yuki Uchiyama, Yoshihiko Fukushima, Naoto Aoki, Say Sato, Tatsuaki Masuda, Junichi Ueda, Hiroyuki Hirooka, and Katsutoshi Kino, for their experiments to learn whether cows painted with zebra-like striping can avoid being bitten by flies.

Any dairy farmer can tell you that biting flies are a pestilent scourge for cattle herds, which is why one so often sees cows throwing their heads, stamping their feet, flicking their tails, and twitching their skin—desperately trying to shake off the nasty creatures. There’s an economic cost as well, since the constant attacks cause the cattle to graze and feed less, bed down for shorter times, and start bunching together, which increases heat stress and risks injury to the animals. That translates into lower milk yields from dairy cows and lower beef yields from feedlot cattle.

You know who isn’t much bothered by biting flies? The zebra. Scientists have long debated the function of the zebra’s distinctive black-and-white striped pattern. Is it for camouflage? Confusing potential predators? Or is it to repel those pesky flies? Tomoki Kojima et al. decided to put the latter hypothesis to the test, painting zebra stripes on six pregnant Japanese black cows at the Aichi Agricultural Research Center in Japan. They used water-borne lacquers that washed away after a few days, so the cows could take turns being in three different groups: zebra stripes, just black stripes, or no stripes (as a control).

The results: the zebra stripes significantly decreased both the number of biting flies on the cattle and the animals’ fly-repelling behaviors compared to those with black stripes or no stripes. The one exception was skin twitching—perhaps because it is the least energy-intensive of those behaviors. Why does it work? The authors suggest it might have something to do with modulated brightness or polarized light that confuses the insects’ motion detection system, which the flies use to control their approach when landing on a surface. But that’s a topic for further study.

Chemistry

Freshly cooked frozen blintzes in a non-stick frying pan coated with Teflon

Credit: Andrevan/CC BY-SA 2.5

Citation: Rotem Naftalovich, Daniel Naftalovich, and Frank Greenway, for experiments to test whether eating Teflon [a form of plastic more formally called “polytetrafluoroethylene”] is a good way to increase food volume and hence satiety without increasing calorie content.

Diet sodas and other zero-calorie drinks are a mainstay of the modern diet, thanks to the development of artificial sweeteners whose molecules can’t be metabolized by the human body. The authors of this paper are intrigued by the notion of zero-calorie foods, which they believe could be achieved by increasing the satisfying volume and mass of food without increasing the calories. And they have just the additive for that purpose: polytetrafluoroethylene (PTFE), more commonly known as Teflon.

Yes, the stuff they use on nonstick cookware. They insist that Teflon is inert, heat-resistant, impervious to stomach acid, tasteless, cost-effective, and available in handy powder form for easy mixing into food. They recommend a ratio of three parts food to one part Teflon powder.

The authors understand that to the average layperson, this is going to sound like a phenomenally bad idea—no thank you, I would prefer not to have powdered Teflon added to my food. So they spend many paragraphs citing all the scientific studies on the safety of Teflon—it didn’t hurt rats in feeding trials!—as well as the many applications for which it is already being used. These include Teflon-coated stirring rods used in labs and coatings on medical devices like bladder catheters and gynecological implants, as well as the catheters used for in vitro fertilization. And guys, you’ll be happy to know that Teflon doesn’t seem to affect sperm motility or viability. I suspect this will still be a hard sell in the consumer marketplace.

Physics

Cacio e pepe is an iconic pasta dish that is also frustratingly difficult to make

Credit: Simone Frau

Citation: Giacomo Bartolucci, Daniel Maria Busiello, Matteo Ciarchi, Alberto Corticelli, Ivan Di Terlizzi, Fabrizio Olmeda, Davide Revignas, and Vincenzo Maria Schimmenti, for discoveries about the physics of pasta sauce, especially the phase transition that can lead to clumping, which can be a cause of unpleasantness.

“Pasta alla cacio e pepe” is a simple dish: just tonnarelli pasta, pecorino cheese, and pepper. But its simplicity is deceptive. The dish is notoriously challenging to make because it’s so easy for the sauce to form unappetizing clumps with a texture more akin to stringy mozzarella than to something smooth and creamy. As we reported in April, Italian physicists came to the rescue with a foolproof recipe based on their many scientific experiments, according to a new paper published in the journal Physics of Fluids. The trick: using corn starch for the cheese and pepper sauce instead of relying on however much starch leaches into the boiling water as the pasta is cooked.

Traditionally, the chef will extract part of the water and starch solution—which is cooled to a suitable temperature to avoid clumping as the cheese proteins “denaturate”—and mix it with the cheese to make the sauce, adding the pepper last, right before serving. But the authors note that temperature is not the only factor that can lead to this dreaded “mozzarella phase.” If one tries to mix cheese and water without any starch, the clumping is more pronounced. There is less clumping with water containing a little starch, like water in which pasta has been cooked. And when one mixes the cheese with pasta water “risottata”—i.e., collected and heated in a pan so enough water evaporates that there is a higher concentration of starch—there is almost no clumping.

The authors found that the correct starch ratio is between 2 and 3 percent of the cheese weight. Below that, you get the clumping phase separation; above that, the sauce becomes stiff and unappetizing as it cools. Pasta water alone contains too little starch. Using pasta water “risottata” may concentrate the starch, but the chef has less control over the precise amount. So the authors recommend simply dissolving 4 grams of powdered potato or corn starch in 40 grams of water, heating it gently until it thickens, and combining that gel with the cheese. They also recommend toasting the black pepper briefly before adding it to the mixture to enhance its flavors and aromas.
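For anyone scaling the recipe up or down, the arithmetic is simple. The sketch below only illustrates the ratios described above (2 to 3 percent starch by cheese weight, with the starch dissolved in roughly ten times its weight of water); the function name and defaults are ours, not the authors’.

```python
def cacio_e_pepe_sauce(cheese_g, starch_fraction=0.025, water_per_starch=10):
    """Scale the starch-stabilized sauce described above.

    Assumed defaults: starch at 2-3 percent of the cheese weight (0.025 here),
    dissolved in about ten times its weight of water to form the gel.
    """
    starch_g = cheese_g * starch_fraction
    water_g = starch_g * water_per_starch
    return {"cheese_g": cheese_g, "starch_g": round(starch_g, 1), "water_g": round(water_g, 1)}

# 160 g of pecorino works out to roughly 4 g of starch dissolved in 40 g of water,
# matching the quantities suggested in the paper.
print(cacio_e_pepe_sauce(160))
```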

Engineering Design

Experimental setup: (a) cardboard enclosure, (b) UV-C tube light, (c) SMPS

Credit: Vikash Kumar and Sarthak Mittal

Citation: Vikash Kumar and Sarthak Mittal, for analyzing, from an engineering design perspective, “how foul-smelling shoes affects the good experience of using a shoe-rack.”

Shoe odor is a universal problem, even in India, according to the authors of this paper, who hail from Shiv Nadar University (SNU) in Uttar Pradesh. All that heat and humidity means people perspire profusely when engaging even in moderate physical activity. Add in a lack of proper ventilation and washing, and shoes become a breeding ground for the odor-causing bacterium Kytococcus sedentarius. Most Indians make use of shoe racks to store their footwear, and the odors can become quite intense in that closed environment.

Yet nobody has really studied the “smelly shoe” problem when it comes to shoe racks. Enter Kumar and Mittal, who conducted a pilot study with the help of 149 first-year SNU students. More than half reported feeling uncomfortable about their own or someone else’s smelly shoes, and 90 percent kept their shoes in a shoe rack. Common methods to combat the odor included washing the shoes and drying them in the sun; using spray deodorant; or sprinkling the shoes with an antibacterial powder. They were unaware of many current odor-combatting products on the market, such as tea tree and coconut oil solutions, thyme oil, or isopropyl alcohol.

Clearly, there is an opportunity to make a killing in the odor-resistant shoe rack market. So naturally Kumar and Mittal decided to design their own version. They opted to use bacteria-killing UV rays (via a UV-C tube light) as their built-in “odor eater,” testing their device on the shoes of several SNU athletes, “which had a very strong noticeable odor.” They concluded that an exposure time of two to three minutes was sufficient to kill the bacteria and get rid of the odor.

Aviation

Wing membranes (patagia) of Townsend's big-eared bat, Corynorhinus townsendii

Credit: Public domain

Citation: Francisco Sánchez, Mariana Melcón, Carmi Korine, and Berry Pinshow, for studying whether ingesting alcohol can impair bats’ ability to fly and also their ability to echolocate.

Nature is rife with naturally occurring ethanol, particularly from ripening fruit, and that fruit in turn is consumed by various microorganisms and animal species. There are occasional rare instances of some mammals, birds, and even insects consuming fruit rich in ethanol and becoming intoxicated, making those creatures more vulnerable to potential predators or more accident-prone due to lessened motor coordination. Sánchez et al. decided to look specifically at the effects of ethanol on Egyptian fruit bats, which have been shown to avoid high-ethanol fruit. The authors wondered if this might be because the bats wanted to avoid becoming inebriated.

They conducted their experiments on adult male fruit bats kept in an outdoor cage that served as a long flight corridor. The bats were given liquid food with varying amounts of ethanol and then released in the corridor, with the authors timing how long it took each bat to fly from one end to the other. A second experiment followed the same basic protocol, but this time the authors recorded the bats’ echolocation calls with an ultrasonic microphone. The results: The bats that received liquid food with the highest ethanol content took longer to fly the length of the corridor, evidence of impaired flight ability. The quality of those bats’ echolocation was also adversely affected, putting them at a higher risk of colliding with obstacles mid-flight.

Psychology

Narcissus (1597–99) by Caravaggio; the man in love with his own reflection

Credit: Public domain

Citation: Marcin Zajenkowski and Gilles Gignac, for investigating what happens when you tell narcissists—or anyone else—that they are intelligent.

Not all narcissists are created equal. There are vulnerable narcissists, who tend to be socially withdrawn, have low self-esteem, and are prone to negative emotions. And then there are grandiose narcissists, who exhibit social boldness and high self-esteem and are more likely to overestimate their own intelligence. The prevailing view is that this overconfidence stems from narcissism. The authors wanted to explore whether the effect might also work in reverse, i.e., whether believing one has superior intelligence thanks to positive external feedback can induce at least a temporary state of narcissism.

Zajenkowski and Gignac recruited 361 participants from Poland who were asked to rate their level of intelligence compared to other people; complete the Polish version of the Narcissistic Personality Inventory; and take an IQ test to compare their perceptions of their own intelligence with an objective measurement. The participants were then randomly assigned to one of two groups. One group received positive feedback—telling them they did indeed have a higher IQ than most people—while the other received negative feedback.

The results confirmed most of the researchers’ hypotheses. In general, participants gave lower estimates of their relative intelligence after completing the IQ test, which provided an objective check of sorts. But the type of feedback they received had a measurable impact. Positive feedback enhanced their feelings of uniqueness (a key aspect of grandiose narcissism). Those who received negative feedback rated their own intelligence as being lower, and that negative feedback had a larger effect than positive feedback. The authors concluded that external feedback helped shape the subjects’ perception of their own intelligence, regardless of the accuracy of that feedback.

Nutrition

Rainbow lizards eating ‘four cheese’ pizza at a seaside tourist resort in Togo.

Credit: Daniele Dendi et al, 2022

Citation: Daniele Dendi, Gabriel H. Segniagbeto, Roger Meek, and Luca Luiselli, for studying the extent to which a certain kind of lizard chooses to eat certain kinds of pizza.

Move over, Pizza Rat, here come the Pizza Lizards—rainbow lizards, to be precise. This is a species common to urban and suburban West Africa. The lizards primarily live off insects and other arthropods, but their proximity to humans has led to some developing a more omnivorous approach to their foraging. Bread is a particular favorite. Case in point: One fine sunny day at a Togo seaside resort, the authors noticed a rainbow lizard stealing a tourist’s slice of four-cheese pizza and happily chowing down.

Naturally, they wanted to know if this was an isolated incident or whether the local rainbow lizards routinely feasted on pizza slices. And did the lizards have a preferred topping? Inquiring minds need to know. So they monitored the behavior of nine particular lizards, giving them the choice between a plate of four-cheese pizza and a plate of “four seasons” pizza, spaced about 10 meters apart.

It only took 15 minutes for the lizards to find the pizza and eat it, sometimes fighting over the remaining slices. But they only ate the four-cheese pizza. For the authors, this suggests there might be some form of chemical cues that attract them to the cheesy pizzas, or perhaps it’s easier for them to digest. I’d love to see how the lizards react to the widely derided Canadian bacon and pineapple pizza.

Pediatrics

Pumped breast milk in bottles

Citation: Julie Mennella and Gary Beauchamp, for studying what a nursing baby experiences when the baby’s mother eats garlic.

Mennella and Beauchamp designed their experiment to investigate two questions: whether the consumption of garlic altered the odor of a mother’s breast milk, and if so, whether those changes affected the behavior of nursing infants. (Garlic was chosen because it is known to produce off flavors in dairy cow milk and affect human body odor.) They recruited eight women who were exclusively breastfeeding their infants, taking samples of their breast milk over a period when the participants abstained from eating sulfurous foods (garlic, onion, asparagus), and more samples after the mothers consumed either a garlic capsule or a placebo.

The results: Mothers who ingested the garlic capsules produced milk with a perceptibly more intense odor, as evaluated by several adult panelists brought in to sniff the breast milk samples. The odor peaked in intensity two hours after ingestion and decreased thereafter, which is consistent with prior research on cows that ingested highly odorous feeds. As for the infants, those whose mothers ingested garlic attached to the breast for longer periods and sucked more when the milk smelled like garlic. This could be relevant to ongoing efforts to determine whether sensory experiences during breastfeeding can influence how readily infants accept new foods upon weaning, and perhaps even their later food preferences.

Literature

closeup of a hand with clubbed fingernails

Credit: William B. Bean

Citation: The late Dr. William B. Bean, for persistently recording and analyzing the rate of growth of one of his fingernails over a period of 35 years.

If you’re surprised to see a study on fingernail growth rates under the Literature category, it will all make sense once you read the flowery prose stylings of Dr. Bean. He really did keep detailed records of how fast his fingernails grew for 35 years, claiming in his final report that “the nail provides a slowly moving keratin kymograph that measures age on the inexorable abscissa of time.” He sprinkles his observations with ponderous references to medieval astrology, James Boswell, and Moby Dick, with a dash of curmudgeonly asides bemoaning the sterile modern medical teaching methods that permeate “the teeming mass of hope and pain, technical virtuosity, and depersonalization called a ‘health center.'”

So what did our pedantic doctor discover in those 35 years, not just studying his own nails, but meticulously reviewing all the available scientific literature? Well, for starters, the rate of fingernail growth diminishes as one ages; Bean noted that his growth rates remained steady early on, but “slowed down a trifle” over the last five years of his project. Nails grow faster in children than adults. A warm environment can also accelerate growth, as does biting one’s fingernails—perhaps, he suggests, because the biting stimulates blood flow to the area. And he debunks the folklore of hair and nails growing even after death: it’s just the retraction and contraction of the skin post-mortem that makes it seem like the nails are growing.

Peace

Citation: Fritz Renner, Inge Kersbergen, Matt Field, and Jessica Werthmann, for showing that drinking alcohol sometimes improves a person’s ability to speak in a foreign language.

Alcohol is well-known to have detrimental effects on what’s known in psychological circles as “executive functioning,” impacting things like working memory and inhibitory control. Since speaking a foreign language also relies on executive functioning, one would expect intoxication to hurt fluency rather than help it. Yet there is a widespread belief among bilingual people that a little bit of alcohol actually improves one’s fluency in a foreign language. Renner et al. decided to investigate further.

They recruited 50 native German-speaking undergrad psychology students at Maastricht University in the Netherlands who were also fluent in Dutch. They were randomly divided into two groups. One group received an alcoholic drink (vodka with bitter lemon), and the other received water. Each participant consumed enough to be slightly intoxicated after 15 minutes, and then engaged in a discussion in Dutch with a native Dutch speaker. Afterward, they were asked to rate their self-perception of their skill at Dutch, with the Dutch speakers offering independent observer ratings.

The researchers were surprised to find that intoxication improved the participants’ Dutch fluency, based on the independent observer reports. (Self-evaluations were largely unaffected by intoxication levels.) One can’t simply attribute this to so-called “Dutch courage,” i.e., increased confidence associated with intoxication. Rather, the authors suggest that intoxication lowers language anxiety, thereby increasing one’s foreign language proficiency, although further research would be needed to support that hypothesis.

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Scientists unlock secret to thick, stable beer foams

For many beer lovers, a nice thick head of foam is one of life’s pure pleasures, and the longer that foam lasts, the better the beer-drinking experience. A team of Swiss researchers spent seven years studying why some beer foams last longer than others and found that the degree of fermentation—i.e., whether a given beer has been singly, doubly, or triply fermented—is crucial, according to a new paper published in the journal Physics of Fluids.

As previously reported, foams are ubiquitous in everyday life, found in foods (whipped cream), beverages (beer, cappuccino), shaving cream and hair-styling mousse, packing peanuts, building insulation, flame-retardant materials, and so forth. All foams are the result of air being beaten into a liquid formula that contains some kind of surfactant (surface-active agent), usually fats or proteins in edible foams, or chemical additives in non-edible products. That surfactant strengthens the liquid film walls of the bubbles to keep them from collapsing.

Individual bubbles typically form a sphere because that’s the shape with the minimum surface area for any volume and hence is the most energy-efficient. That compact shape also lets many bubbles pack tightly together to form a foam. But bubbles “coarsen” over time, the result of gravity pulling down on the liquid and thinning out the walls. Eventually, they start to look more like soccer balls (polyhedrons). In a coarsening foam, smaller bubbles are gradually absorbed by larger ones. There is less and less liquid to separate the individual bubbles, so they press together to fill the space.

This “jamming” is why foams are typically far more rigid than their gas (95 percent) and liquid (5 percent) components. The more tightly the bubbles jam together, the less they can move around and the greater the pressure inside them becomes, giving them properties of a solid.
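The absorption of small bubbles by larger ones is driven by the pressure difference across a curved interface: smaller bubbles carry a higher internal pressure, so their gas diffuses into larger neighbors. Here is a back-of-the-envelope estimate using the Young-Laplace relation for a gas bubble in liquid; the surface tension value is an assumed, beer-like number, not a figure from the Swiss study.

```python
# Young-Laplace estimate of the excess pressure inside a gas bubble in liquid:
# delta_P = 2 * gamma / r (a single gas-liquid interface is assumed).
GAMMA = 0.042  # N/m, assumed beer-like surface tension

def excess_pressure(radius_m, surface_tension=GAMMA):
    return 2 * surface_tension / radius_m  # pascals

for r_um in (10, 100, 1000):  # 10 µm, 100 µm, and 1 mm bubbles
    print(f"r = {r_um:>4} µm -> excess pressure ≈ {excess_pressure(r_um * 1e-6):7.0f} Pa")

# The 10 µm bubble sits at roughly 100 times the excess pressure of the 1 mm bubble,
# which is why gas diffuses from small bubbles into large ones as the foam coarsens.
```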

Various factors can affect foam stability. For instance, in 2019, Japanese researchers investigated a phenomenon known as “collective bubble collapse,” or CBC, in which breaking one bubble at the edge of a foam results in a cascading effect as the breakage spreads to other bubbles in the foam. They identified two distinct mechanisms for the resulting CBCs: a so-called “propagating mode,” in which a broken bubble is absorbed into the liquid film, and a “penetrating mode,” in which the breakage of a bubble causes droplets to shoot off and hit other bubbles, causing them to break in turn.

Physics of badminton’s new killer spin serve

Serious badminton players are constantly exploring different techniques to give them an edge over opponents. One of the latest innovations is the spin serve, a devastatingly effective method in which a player adds a pre-spin just before the racket contacts the shuttlecock (aka the birdie). It’s so effective—some have called it “impossible to return”—that the Badminton World Federation (BWF) banned the spin serve in 2023, at least until after the 2024 Paralympic Games in Paris.

The sanction wasn’t meant to quash innovation but to address players’ concerns about the possible unfair advantages the spin serve conferred. The BWF thought that international tournaments shouldn’t become the test bed for the technique, which is markedly similar to the previously banned “Sidek serve.” The BWF permanently banned the spin serve earlier this year. Chinese physicists have now teased out the complex fundamental physics of the spin serve, publishing their findings in the journal Physics of Fluids.

Shuttlecocks are unique among the various projectiles used in different sports due to their open conical shape. Sixteen overlapping feathers protrude from a rounded cork base that is usually covered in thin leather. The birdies one uses for leisurely backyard play might be synthetic nylon, but serious players prefer actual feathers.

Those overlapping feathers give rise to quite a bit of drag, such that the shuttlecock decelerates rapidly as it travels, and its trajectory falls at a much steeper angle than it rises. The extra drag also means that players must exert quite a bit of force to hit a shuttlecock the full length of a badminton court. Still, shuttlecocks can achieve top speeds of more than 300 mph. The feathers also give the birdie a slight natural spin around its axis, and this can affect different strokes. For instance, slicing from right to left, rather than vice versa, will produce a better tumbling net shot.
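To get a feel for how punishing that drag is, here is a quick estimate using the standard quadratic drag law. The mass, diameter, and drag coefficient are typical assumed values for a feather shuttlecock, not numbers taken from the new paper.

```python
import math

# Quadratic drag: F = 0.5 * rho * Cd * A * v**2, deceleration a = F / m.
RHO = 1.2         # kg/m^3, air density (assumed)
CD = 0.6          # drag coefficient, an assumed typical value for a shuttlecock
DIAMETER = 0.065  # m, skirt diameter (assumed)
MASS = 0.005      # kg, roughly 5 g (assumed)
AREA = math.pi * (DIAMETER / 2) ** 2

def deceleration_g(speed_m_s):
    drag_force = 0.5 * RHO * CD * AREA * speed_m_s ** 2
    return drag_force / MASS / 9.81  # in multiples of gravitational acceleration

for v in (30, 60, 120):  # roughly a gentle clear, a fast drive, and a smash
    print(f"{v:>3} m/s -> deceleration ≈ {deceleration_g(v):5.0f} g")

# Even mid-speed shots shed velocity at tens of g, which is why the birdie's
# flight path drops far more steeply than it climbed.
```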

Chronophotographies of shuttlecocks after an impact with a racket. Credit: Caroline Cohen et al., 2015

The cork base makes the birdie aerodynamically stable: No matter how one orients the birdie, once airborne, it will turn so that it is traveling cork-first and will maintain that orientation throughout its trajectory. A 2015 study examined the physics of this trademark flip, recording flips with high-speed video and conducting free-fall experiments in a water tank to study how the shuttlecock’s geometry affects the behavior. The latter confirmed that shuttlecock feather geometry hits a sweet spot in terms of an opening inclination angle that is neither too small nor too large. And the researchers found that feather shuttlecocks are indeed better than synthetic ones, deforming more when hit to produce a more triangular trajectory.

Ice discs slingshot across a metal surface all on their own


VA Tech experiment was inspired by Death Valley’s mysterious “sailing stones” at Racetrack Playa.

Graduate student Jack Tapocik sets up ice on an engineered surface in the VA Tech lab of Jonathan Boreyko. Credit: Alex Parrish/Virginia Tech

Scientists have figured out how to make frozen discs of ice self-propel across a patterned metal surface, according to a new paper published in the journal ACS Applied Materials and Interfaces. It’s the latest breakthrough to come out of the Virginia Tech lab of mechanical engineer Jonathan Boreyko.

A few years ago, Boreyko’s lab experimentally demonstrated a three-phase Leidenfrost effect in water vapor, liquid water, and ice. The Leidenfrost effect is what happens when you dash a few drops of water onto a very hot, sizzling skillet. The drops levitate, sliding around the pan with wild abandon. If the surface is at least 400° Fahrenheit (well above the boiling point of water), cushions of water vapor, or steam, form underneath them, keeping them levitated. The effect also works with other liquids, including oils and alcohol, but the temperature at which it manifests will be different.

Boreyko’s lab discovered that this effect can also be achieved in ice simply by placing a thin, flat disc of ice on a heated aluminum surface. When the plate was heated above 150° C (302° F), the ice did not levitate on a vapor the way liquid water does. Instead, there was a significantly higher threshold of 550° Celsius (1,022° F) for levitation of the ice to occur. Unless that critical threshold is reached, the meltwater below the ice just keeps boiling in direct contact with the surface. Cross that critical point and you will get a three-phase Leidenfrost effect.

The key is a temperature differential in the meltwater just beneath the ice disc. The bottom of the meltwater is boiling, but the top of the meltwater sticks to the ice. It takes a lot to maintain such an extreme difference in temperature, and doing so consumes most of the heat from the aluminum surface, which is why it’s harder to achieve levitation of an ice disc. Ice can suppress the Leidenfrost effect even at very high temperatures (up to 550° C), which means that using ice particles instead of liquid droplets would be better for many applications involving spray quenching: rapid cooling in nuclear power plants, firefighting, or rapid heat quenching when shaping metals, for example.

This time around, Boreyko et al. have turned their attention to what the authors term “a more viscous analog” to a Leidenfrost ratchet, a form of droplet self-propulsion. “What’s different here is we’re no longer trying to levitate or even boil,” Boreyko told Ars. “Now we’re asking a more straightforward question: Is there a way to make ice move across the surface directionally as it is melting? Regular melting at room temperature. We’re not boiling, we’re not levitating, we’re not Leidenfrosting. We just want to know, can we make ice shoot across the surface if we design a surface in the right way?”

Mysterious moving boulders

The researchers were inspired by Death Valley’s famous “sailing stones” on Racetrack Playa. Watermelon-sized boulders are strewn throughout the dry lake bed, and they leave trails in the cracked earth as they slowly migrate a couple of hundred meters each season. Scientists didn’t figure out what was happening until 2014. Although co-author Ralph Lorenz (Johns Hopkins University) admitted he thought theirs would be “the most boring experiment ever” when they first set it up in 2011, two years later, the boulders did indeed begin to move while the playa was covered with a pond of water a few inches deep.

So Lorenz and his co-authors were finally able to identify the mechanism. The ground is too hard to absorb rainfall, and that water freezes when the temperature drops. When temperatures rise above freezing again, the ice starts to melt, creating ice rafts floating on the meltwater. And when the winds are sufficiently strong, they cause the ice rafts to drift along the surface.

A sailing stone at Death Valley’s Racetrack Playa. Credit: Tahoenathan/CC BY-SA 3.0

“Nature had to have wind blowing to kind of push the boulder and the ice along the meltwater that was beneath the ice,” said Boreyko. “We thought, what if we could have a similar idea of melting ice moving directionally but use an engineered structure to make it happen spontaneously so we don’t have to have energy or wind or anything active to make it work?”

The team made their ice discs by pouring distilled water into thermally insulated polycarbonate Petri dishes. This resulted in bottom-up freezing, which minimizes air bubbles in the ice. They then milled asymmetric grooves into uncoated aluminum plates in a herringbone pattern—essentially creating arrowhead-shaped channels—and bonded them to hot plates heated to the desired temperature. Each ice disc was placed on the plate with rubber tongs, and the experiments were filmed from various angles to fully capture the disc behavior.

The herringbone pattern is the key. “The directionality is what really pushes the water,” Jack Tapocik, a graduate student in Boreyko’s lab, told Ars. “The herringbone doesn’t allow for water to flow backward, the water has to go forward, and that basically pushes the water and the ice together forward. We don’t have a treated surface, so the water just sits on top and the ice all moves as one unit.”

Boreyko draws an analogy to tubing on a river, except it’s the directional channels rather than gravity causing the flow. “You can see [in the video below] how it just follows the meltwater,” he said. “This is your classic entrainment mechanism where if the water flows that way and you’re floating on the water, you’re going to go the same way, too. It’s basically the same idea as what makes a Leidenfrost droplet also move one way: It has a vapor flow underneath. The only difference is that was a liquid drifting on a vapor flow, whereas now we have a solid drifting on a liquid flow. The densities and viscosities are different, but the idea is the same: You have a more dense phase that is drifting on the top of a lighter phase that is flowing directionally.”

Jonathan Boreyko/Virginia Tech

Next, the team repeated the experiment, this time coating the aluminum herringbone surface with water-repellant spray, hoping to speed up the disc propulsion. Instead, they found that the disc ended up sticking to the treated surface for a while before suddenly slingshotting across the metal plate.

“It’s a totally different concept with totally different physics behind it, and it’s so much cooler,” said Tapocik. “As the ice is melting on these coated surfaces, the water just doesn’t want to sit within the channels. It wants to sit on top because of the [hydrophobic] coating we have on there. The ice is directly sticking now to the surface, unlike before when it was floating. You get this elongated puddle in front. The easiest place [for the ice] to be is in the center of this giant, long puddle. So it re-centers, and that’s what moves it forward like a slingshot.”

Essentially, the water keeps expanding asymmetrically, and that difference in shape gives rise to a mismatch in surface tension because the amount of force that surface tension exerts on a body depends on curvature. The flatter puddle shape in front has less curvature than the smaller shape in back. As the video below shows, when the mismatch in surface tension becomes sufficiently strong, “It just rips the ice off the surface and flings it along,” said Boreyko. “In the future, we could try putting little things like magnets on top of the ice. We could probably put a boulder on it if we wanted to. The Death Valley effect would work with or without a boulder because it’s the floating ice raft that moves with the wind.”

Jonathan Boreyko/Virginia Tech

One potential application is energy harvesting. For example, one could pattern the metal surface in a circle rather than a straight line so the melting ice disk would continually rotate. Put magnets on the disk, and they would also rotate and generate power. One might even attach a turbine or gear to the rotating disc.

The effect might also provide a more energy-efficient means of defrosting, a longstanding research interest for Boreyko. “If you had a herringbone surface with a frosting problem, you could melt the frost, even partially, and use these directional flows to slingshot the ice off the surface,” he said. “That’s both faster and uses less energy than having to entirely melt the ice into pure water. We’re looking at potentially over a tenfold reduction in heating requirements if you only have to partially melt the ice.”

That said, “Most practical applications don’t start from knowing the application beforehand,” said Boreyko. “It starts from ‘Oh, that’s a really cool phenomenon. What’s going on here?’ It’s only downstream from that it turns out you can use this for better defrosting of heat exchangers for heat pumps. I just think it’s fun to say that we can make a little melting disk of ice very suddenly slingshot across the table. It’s a neat way to grab your attention and think more about melting and ice and how all this stuff works.”

DOI: ACS Applied Materials and Interfaces, 2025. 10.1021/acsami.5c08993  (About DOIs).

Misunderstood “photophoresis” effect could loft metal sheets to exosphere


Photophoresis can generate a tiny bit of lift without any moving parts.

Image of a wooden stand holding a sealed glass bulb with a spinning set of vanes, each of which has a light and a dark side.

Most people would recognize the device in the image above, although they probably wouldn’t know it by its formal name: the Crookes radiometer. As its name implies, placing the radiometer in light produces a measurable change: the blades start spinning.

Unfortunately, many people misunderstand the physics of its operation (which we’ll return to shortly). The actual forces that drive the blades to spin, called photophoresis, can act on a variety of structures as long as they’re placed in a sufficiently low-density atmosphere. Now, a team of researchers has figured out that it may be possible to use the photophoretic effect to loft thin sheets of metal into the upper atmosphere of Earth and other planets. While their idea is to use it to send probes to the portion of the atmosphere that’s too high for balloons and too low for satellites, they have tested some working prototypes a bit closer to the Earth’s surface.

Photophoresis

It’s quite common—and quite wrong—to see explanations of the Crookes radiometer that involve radiation pressure. Supposedly, the dark sides of the blades absorb more photons, each of which carries a tiny bit of momentum, giving the dark side of the blades a consistent push. The problem with this explanation is that photons are bouncing off the silvery side, which imparts even more momentum. If the device were spinning due to radiation pressure, it would be turning in the opposite direction than it actually does.

An excess of the absorbed photons on the dark side is key to understanding how it works, though. Photophoresis operates through the temperature difference that develops between the warm, light-absorbing dark side of the blade and the cooler silvered side.

Any gas molecule that bumps into the dark side will likely pick up some of the excess thermal energy from it and move away from the blade faster than it arrived. At the sorts of atmospheric pressures we normally experience, these molecules don’t get very far before they bump into other gas molecules, which keeps any significant differences from developing.

But a Crookes radiometer is in a sealed glass container with a far lower air pressure. This allows the gas molecules to speed off much farther from the dark surface of the blade before they run into anything, creating an area of somewhat lower pressure at its surface. That causes gas near the surface of the shiny side to rush around and fill this lower-pressure area, imparting the force that starts the blades turning.
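The role of the partial vacuum can be made concrete with the textbook kinetic-theory estimate of a gas molecule’s mean free path. The molecular diameter, temperature, and bulb pressure below are generic assumed values, not measurements from any radiometer.

```python
import math

# Kinetic-theory mean free path: lambda = k_B * T / (sqrt(2) * pi * d**2 * P)
K_B = 1.380649e-23  # J/K, Boltzmann constant
T = 300.0           # K, room temperature (assumed)
D = 3.7e-10         # m, effective diameter of an air molecule (assumed)

def mean_free_path(pressure_pa):
    return K_B * T / (math.sqrt(2) * math.pi * D ** 2 * pressure_pa)

for label, pressure in (("sea-level air (101,325 Pa)", 101_325.0),
                        ("partial vacuum (~1 Pa)", 1.0)):
    print(f"{label:30s} lambda ≈ {mean_free_path(pressure):.1e} m")

# At atmospheric pressure the mean free path is tens of nanometers, so the heated
# molecules immediately collide with others and no imbalance builds up; at ~1 Pa
# they travel millimeters, letting the pressure asymmetry push on the vanes.
```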

It’s pretty impressively inefficient in that sort of configuration, though. So people have spent a lot of time trying to design alternative configurations that can generate a bit more force. One idea with a lot of research traction is a setup that involves two thin metal sheets—one light, one dark—arranged parallel to each other. Both sheets would be heavily perforated to cut down on weight. And a subset of them would have a short pipe connecting holes on the top and bottom sheet. (This has picked up the nickname “nanocardboard.”)

These pipes would serve several purposes. One is to simply link the two sheets into a single unit. Another is to act as an insulator, keeping heat from moving from the dark sheet to the light one, and thus enhancing the temperature gradient. Finally, they provide a direct path for air to move from the top of the light-colored sheet to the bottom of the dark one, giving a bit of directed thrust to help keep the sheets aloft.

Optimization

As you might imagine, there are a lot of free parameters you can tweak: the size of the gap between the sheets, the density of perforations in them, the number of those holes that are connected by a pipe, and so on. So a small team of researchers developed a system to model different configurations and attempt to optimize for lift. (We’ll get to their motivations for doing so a bit later.)

Starting with a disk of nanocardboard, “The inputs to the model are the geometric, optical and thermal properties of the disk, ambient gas conditions, and external radiative heat fluxes on the disk,” as the researchers describe it. “The outputs are the conductive heat fluxes on the two membranes, the membrane temperatures, and the net photophoretic lofting force on the structure.” In general, the ambient gas conditions needed to generate lift are similar to the ones inside the Crookes radiometer: well below the air pressure at sea level.

The model suggested that three trends should influence any final designs. The first is that the density of perforations is a balance: at relatively low elevations (meaning a denser atmosphere), many perforations increase the stress on large sheets, but they decrease the stress for small items at high elevations. The second is that, rather than increasing with surface area, lift tends to drop because larger sheets are more likely to equilibrate to the prevailing temperatures. A square millimeter of nanocardboard produces over 10 times more lift per surface area than a 10-square-centimeter piece of the same material.

Finally, the researchers calculate that the lift is at its maximum in the mesosphere, the area just above the stratosphere (50–100 kilometers above Earth’s surface).

Light and lifting

The researchers then built a few sheets of nanocardboard to test the output of their model. The actual products, primarily made of chromium, aluminum, and aluminum oxide, were incredibly light, weighing only a gram per square meter of material. When illuminated by a laser or white LED, they generated measurable force on a testing device, provided the atmosphere was kept sufficiently sparse. With an exposure equivalent to sunlight, the device generated more lift than its own weight.

It’s a really nice demonstration that we can take a relatively obscure and weak physical effect and design devices that can levitate in the upper atmosphere, powered by nothing more than sunlight—which is pretty cool.

But the researchers have a goal beyond that. The mesosphere turns out to be a really difficult part of the atmosphere to study. It’s not dense enough to support balloons or aircraft, but it still has enough gas to make quick work of any satellites. So the researchers really want to turn one of these devices into an instrument-carrying aircraft. Unfortunately, that would mean adding the structural components needed to hold instruments, along with the instruments themselves. And even in the mesosphere, where lift is optimal, these things do not generate much in the way of lift.

Plus, there’s the issue of getting them there: they won’t generate enough lift in the lower atmosphere, so they’ll have to be carried into the upper stratosphere by something else and then be released gently enough not to damage their fragile structure. And then, unless you’re lofting them during the polar summer, they will likely come floating back down at night.

None of this is to say this is an impossible dream. But there are definitely a lot of very large hurdles between the work and practical applications on Earth—much less on Mars, where the authors suggest the system could also be used to explore the mesosphere. But even if that doesn’t end up being realistic, this is still a pretty neat bit of physics.

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Scientists hid secret codes in light to combat video fakes

Hiding in the light

Previously, the Cornell team had figured out how to make small changes to specific pixels to tell if a video had been manipulated or created by AI. But its success depended on the creator of the video using a specific camera or AI model. Their new method, “noise-coded illumination” (NCI), addresses those and other shortcomings by hiding watermarks in the apparent noise of light sources. A small piece of software can do this for computer screens and certain types of room lighting, while off-the-shelf lamps can be coded via a small attached computer chip.

“Each watermark carries a low-fidelity time-stamped version of the unmanipulated video under slightly different lighting. We call these code videos,” Davis said. “When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos, which lets us see where changes were made. And if someone tries to generate fake video with AI, the resulting code videos just look like random variations.” Because the watermark is designed to look like noise, it’s difficult to detect without knowing the secret code.

The Cornell team tested their method against a broad range of manipulations, including warp cuts, changes to speed and acceleration, compositing, and deepfakes. Their technique proved robust to things like signal levels below human perception; subject and camera motion; camera flash; human subjects with different skin tones; different levels of video compression; and indoor and outdoor settings.

“Even if an adversary knows the technique is being used and somehow figures out the codes, their job is still a lot harder,” Davis said. “Instead of faking the light for just one video, they have to fake each code video separately, and all those fakes have to agree with each other.” That said, Davis added, “This is an important ongoing problem. It’s not going to go away, and in fact it’s only going to get harder.”
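As a toy illustration of the underlying idea (and emphatically not the Cornell team’s NCI algorithm, which recovers full low-fidelity code videos rather than a single number), the sketch below modulates per-frame brightness with a secret pseudorandom code and checks for it by correlation. Footage rendered without knowledge of the code fails the check.

```python
import numpy as np

SECRET_SEED = 1234  # hypothetical shared secret
N_FRAMES = 5000

# A pseudorandom ±1 code modulated onto the light source; to a viewer (or a forger)
# it looks like ordinary flicker noise.
code = np.random.default_rng(SECRET_SEED).choice([-1.0, 1.0], size=N_FRAMES)

# Per-frame average brightness of genuine footage: slow scene changes, the tiny coded
# modulation (depth 0.3 is an arbitrary assumed value), and sensor noise.
scene = 100 + 5 * np.sin(np.linspace(0, 20, N_FRAMES))
real_video = scene + 0.3 * code + np.random.default_rng(1).normal(0, 1.0, N_FRAMES)

# A fake clip rendered without knowledge of the code carries no such modulation.
fake_video = scene + np.random.default_rng(2).normal(0, 1.0, N_FRAMES)

def code_strength(frames):
    """Correlate de-meaned frame brightness against the secret code."""
    return float(np.dot(frames - frames.mean(), code) / len(code))

print(f"genuine footage: {code_strength(real_video):+.3f}")  # close to the 0.3 depth
print(f"forged footage:  {code_strength(fake_video):+.3f}")  # near zero
```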

DOI: ACM Transactions on Graphics, 2025. 10.1145/3742892  (About DOIs).

Research roundup: 7 cool science stories we almost missed


Other July stories: Solving a 150-year-old fossil mystery and the physics of tacking a sailboat.

150-year-old fossil of Palaeocampa anthrax isn’t a sea worm after all. Credit: Christian McCall

It’s a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. July’s list includes the discovery of the tomb of the first Maya king of Caracol in Belize, the fluid dynamics of tacking a sailboat, how to determine how fast blood was traveling when it stained cotton fabric, and how the structure of elephant ears could lead to more efficient indoor temperature control in future building designs, among other fun stories.

Tomb of first king of Caracol found

University of Houston provost and archeologist Diane Chase in newly discovered tomb of the first ruler of the ancient Maya city Caracol and the founder of its royal dynasty.

Credit: Caracol Archeological Project/University of Houston

Archaeologists Arlen and Diane Chase are the foremost experts on the ancient Maya city of Caracol in Belize and are helping to pioneer the use of airborne LiDAR to locate hidden structures in dense jungle, including a web of interconnected roadways and a cremation site in the center of the city’s Northeast Acropolis plaza. They have been painstakingly excavating the site since the mid-1980s. Their latest discovery is the tomb of Te K’ab Chaak, Caracol’s first ruler, who took the throne in 331 CE and founded a dynasty that lasted more than 460 years.

This is the first royal tomb the husband-and-wife team has found in their 40+ years of excavating the Caracol site. Te K’ab Chaak’s tomb (containing his skeleton) was found at the base of a royal family shrine, along with pottery vessels, carved bone artifacts, jadeite jewelry, and a mosaic jadeite death mask. The Chases estimate that the ruler likely stood about 5’7″ tall and was probably quite old when he died, given his lack of teeth. The Chases are in the process of reconstructing the death mask and conducting DNA and stable isotope analysis of the skeleton.

How blood splatters on clothing

Cast-off blood stain pattern

Credit: Jimmy Brown/CC BY 2.0

Analyzing blood splatter patterns is a key focus in forensic science, and physicists have been offering their expertise for several years now, including in two 2019 studies on splatter patterns from gunshot wounds. The latest insights gleaned from physics concern the distinct ways in which blood stains cotton fabrics, according to a paper published in Forensic Science International.

Blood is a surprisingly complicated fluid, in part because the red blood cells in human blood can form long chains, giving it the consistency of sludge. And blood starts to coagulate immediately once it leaves the body. Blood is also viscoelastic: not only does it deform slowly when exposed to an external force, but once that force has been removed, it will return to its original configuration. Add in coagulation and the type of surface on which it lands, and correctly interpreting the resulting spatter patterns becomes incredibly difficult.

The co-authors of the July study splashed five different fabric surfaces with pig’s blood at varying velocities, capturing the action with high-speed cameras. They found that when a blood stain has “fingers” spreading out from the center, the more fingers there are, the faster the blood was traveling when it struck the fabric. And the faster the blood was moving, the more “satellite droplets” there will be—tiny stains surrounding the central stain. Finally, it’s much easier to estimate the velocity of blood splatter on plain-woven cotton than on other fabrics like twill. The researchers plan to extend future work to include a wider variety of fabrics, weaves, and yarns.

DOI: Forensic Science International, 2025. 10.1016/j.forsciint.2025.112543  (About DOIs).

Offshore asset practices of the uber-rich

The uber-rich aren’t like the rest of us in so many ways, including their canny exploitation of highly secretive offshore financial systems to conceal their assets and/or identities. Researchers at Dartmouth have used machine learning to analyze two public databases and identified distinct patterns in the strategies oligarchs and billionaires in 65 different countries employ when squirreling away offshore assets, according to a paper published in the journal PLoS ONE.

One database tracks offshore finance, while the other rates different countries on their “rule of law.” This enabled the team to study key metrics like how much of their assets elites move offshore, how much they diversify, and how much they make use of “blacklisted” offshore centers that are not part of the mainstream financial system. The researchers found three distinct patterns, all tied to where an oligarch comes from.

Billionaires from authoritarian countries are more likely to diversify their hidden assets across many different centers—a “confetti strategy”—perhaps because these are countries likely to exact political retribution. Others, from countries with effective government regulations—or where there is a pronounced lack of civil rights—are more likely to employ a “concealment strategy” that includes more blacklisted jurisdictions, relying more on bearer shares that protect their anonymity. Those elites most concerned about corruption and/or having their assets seized typically employ a hybrid strategy.

The work builds on an earlier 2023 study concluding that issuing sanctions on individual oligarchs in Russia, China, the US, and Hong Kong is less effective than targeting the small, secretive network of financial experts who manage that wealth on behalf of the oligarchs. That’s because sanctioning just one wealth manager effectively takes out several oligarchs at once, per the authors.

DOI: PLoS ONE, 2025. 10.1371/journal.pone.0326228  (About DOIs).

Medieval remedies similar to TikTok trends

Medieval manuscripts like the Cotton MS Vitellius C III highlight uses for herbs that reflect modern-day wellness trends.

Credit: The British Library

The Middle Ages are stereotypically described as the “Dark Ages,” with a culture driven by superstition—including its medical practices. But a perusal of the hundreds of medical manuscripts collected in the online Corpus of Early Medieval Latin Medicine (CEMLM) reveals that in many respects, medical practices were much more sophisticated; some of the remedies are not much different from alternative medicine remedies touted by TikTok influencers today. That certainly doesn’t make them medically sound, but it does suggest we should perhaps not be too hasty in who we choose to call backward and superstitious.

Per Binghamton University historian Meg Leja, medievalists were not “anti-science.” In fact, they were often quite keen on learning from the natural world. And their health practices, however dubious they might appear to us—lizard shampoo, anyone?—were largely based on the best knowledge available at the time. There are detox cleanses and topical ointments, such as crushing the stone of a peach, mixing it with rose oil, and smearing it on one’s forehead to relieve migraine pain. (Rose oil may actually be an effective migraine pain reliever.) The collection is well worth perusing; pair it with the Wellcome-funded Curious Cures in Cambridge Libraries to learn even more about medieval medical recipes.

Physics of tacking a sailboat

The Courant Institute's Christiana Mavroyiakoumou, above at Central Park's Conservatory Water with model sailboats

Credit: Jonathan King/NYU

Possibly the most challenging basic move for beginner sailors is learning how to tack to sail upwind. Done correctly, the sail will flip around into a mirror image of its previous shape. And in competitive sailboat racing, a bad tack can lose the race. So physicists at the University of Michigan decided to investigate the complex fluid dynamics at play to shed more light on the tricky maneuver, according to a paper published in the journal Physical Review Fluids.

After modeling the maneuver and conducting numerical simulations, the physicists concluded that there are three primary factors that determine a successful tack: the stiffness of the sail, its tension before the wind hits, and the final sail angle in relation to the direction of the wind. Ideally, one wants a less flexible, less curved sail with high tension prior to hitting the wind and to end up with a 20-degree final sail angle. Other findings: It’s harder to flip a slack sail when tacking, and how fast one manages to flip the sail depends on the sail’s mass and the speed and acceleration of the turn.

DOI: Physical Review Fluids, 2025. 10.1103/37xg-vcff  (About DOIs).

Elephant ears inspire building design

African bush elephant with ears spread in a threat or attentive position and visible blood vessels

Maintaining a comfortable indoor temperature constitutes the largest fraction of energy usage for most buildings, with the surfaces of walls, windows, and ceilings contributing to roughly 63 percent of energy loss. Engineers at Drexel University have figured out how to make surfaces that help rather than hamper efforts to maintain indoor temperatures: using so-called phase-change materials that can absorb and release thermal energy as needed as they shift between liquid and solid states. They described the breakthrough in a paper published in the Journal of Building Engineering.

The Drexel group previously developed a self-warming concrete using a paraffin-based material, similar to the stuff used to make candles. The trick this time around, they found, was to create the equivalent of a vascular network within cement-based building materials. They used a printed polymer matrix to create a grid of channels in the surface of concrete and filled those channels with the same paraffin-based material. When temperatures drop, the material turns into a solid and releases heat energy; as temperatures rise, it shifts its phase to a liquid and absorbs heat energy.

The group tested several different configurations and found that the most effective combination of strength and thermal regulation was realized with a diamond-shaped grid, which boasted the most vasculature surface area. This configuration successfully slowed the cooling and heating of its surface to between 1 and 1.2 degrees Celsius per hour, while holding up against stretching and compression tests. The structure is similar to that of jackrabbit and elephant ears, which have extensive vascular networks to help regulate body temperature.

DOI: Journal of Building Engineering, 2025. 10.1016/j.jobe.2025.112878  (About DOIs).

ID-ing a century-old museum specimen

Neotype of Palaeocampa anthrax from the Mazon Creek Lagerstätte and rediscovered in the Invertebrate Paleontology collection of the MCZ.

Credit: Richard J. Knecht

Natural history museums have lots of old specimens in storage, and revisiting those specimens can sometimes lead to new discoveries. That’s what happened to University of Michigan evolutionary biologist Richard J. Knecht as he was poring over a collection at Harvard’s Museum of Comparative Zoology while a grad student there. One of the fossils, originally discovered in 1865, was labeled a millipede. But Knecht immediately recognized it as a type of lobopod, according to a paper published in the journal Communications Biology. It’s the youngest lobopod yet found, and this particular species also marks an evolutionary leap since it’s the first known lobopod to be non-marine.

Lobopods are the evolutionary ancestors of arthropods (insects, spiders, and crustaceans), and their fossils are common in Paleozoic seabeds. Apart from tardigrades and velvet worms, however, they were thought to be confined to the oceans. But Palaeocampa anthrax has legs on every trunk segment, as well as almost 1,000 bristly spines covering its body, with orange halos at their tips. Infrared spectroscopy revealed traces of fossilized molecules—likely a chemical that emanated from the spine tips. Since any chemical defense would simply disperse in water, limiting its effectiveness, Knecht concluded that Palaeocampa anthrax was most likely amphibious rather than solely aquatic.

DOI: Communications Biology, 2025. 10.1038/s42003-025-08483-0  (About DOIs).

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Research roundup: 7 cool science stories we almost missed Read More »

peacock-feathers-can-emit-laser-beams

Peacock feathers can emit laser beams

Peacock feathers are greatly admired for their bright iridescent colors, but it turns out they can also emit laser light when dyed multiple times, according to a paper published in the journal Scientific Reports. Per the authors, it’s the first example of a biolaser cavity within the animal kingdom.

As previously reported, the bright iridescent colors in things like peacock feathers and butterfly wings don’t come from any pigment molecules but from how they are structured. The scales of chitin (a polysaccharide common to insects) in butterfly wings, for example, are arranged like roof tiles. Essentially, they form a natural photonic crystal: whereas a diffraction grating spreads light into the entire spectrum, much like a prism, a photonic crystal reflects only certain colors, or wavelengths, of light.

In the case of peacock feathers, it’s the regular, periodic nanostructures of the barbules—fiber-like components composed of ordered melanin rods coated in keratin—that produce the iridescent colors. Different colors correspond to different spacing of the barbules.
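For a rough sense of how spacing sets color, here is an idealized Bragg-type estimate (this is a standard textbook approximation, not the analysis in the paper, and the refractive index and spacings are assumed ballpark figures): at normal incidence, the strongly reflected wavelength is roughly twice the layer spacing times the effective refractive index.

    # Idealized Bragg condition at normal incidence: lambda ~ 2 * n_eff * d.
    # The index and spacings are assumed ballpark values, not from the paper.

    def reflected_wavelength_nm(spacing_nm, n_eff=1.55):
        return 2 * n_eff * spacing_nm

    for spacing in (140, 150, 165, 185):    # hypothetical spacings, in nanometers
        print(f"{spacing} nm spacing -> ~{reflected_wavelength_nm(spacing):.0f} nm reflected")

Nudging the spacing by a few tens of nanometers shifts the reflected color from blue toward green and yellow, which is the basic reason different barbule geometries look like different colors.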

Both are naturally occurring examples of what physicists call photonic crystals. Also known as photonic bandgap materials, photonic crystals are “tunable,” which means they are precisely ordered in such a way as to block certain wavelengths of light while letting others through. Alter the structure by changing the size of the tiles, and the crystals become sensitive to a different wavelength. (In fact, the rainbow weevil can control both the size of its scales and how much chitin is used to fine-tune those colors as needed.)

Even better (from an applications standpoint), the perception of color doesn’t depend on the viewing angle. And the scales are not just for aesthetics; they help shield the insect from the elements. There are several types of manmade photonic crystals, but gaining a better and more detailed understanding of how these structures grow in nature could help scientists design new materials with similar qualities, such as iridescent windows, self-cleaning surfaces for cars and buildings, or even waterproof textiles. Paper currency could incorporate encrypted iridescent patterns to foil counterfeiters.

Peacock feathers can emit laser beams Read More »

merger-of-two-massive-black-holes-is-one-for-the-record-books

Merger of two massive black holes is one for the record books

Physicists with the LIGO/Virgo/KAGRA collaboration have detected the gravitational wave signal (dubbed GW231123) of the most massive merger between two black holes yet observed, resulting in a new black hole that is 225 times more massive than our Sun. The results were presented at the Edoardo Amaldi Conference on Gravitational Waves in Glasgow, Scotland.

The LIGO/Virgo/KAGRA collaboration searches the universe for gravitational waves produced by the mergers of black holes and neutron stars. LIGO detects gravitational waves via laser interferometry, using high-powered lasers to measure tiny changes in the distance between two objects positioned kilometers apart. LIGO has detectors in Hanford, Washington, and in Livingston, Louisiana. A third detector in Italy, Advanced Virgo, came online in 2016. In Japan, KAGRA is the first gravitational-wave detector in Asia and the first to be built underground. Construction began on LIGO-India in 2021, and physicists expect it will turn on sometime after 2025.
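To put “tiny changes in the distance” in perspective, here is a back-of-the-envelope strain calculation. The numbers are typical textbook orders of magnitude for a strong event, not values specific to GW231123.

    # Gravitational-wave strain h is the fractional change in arm length: h = dL / L.
    # Typical order-of-magnitude values; not specific to this detection.

    arm_length_m = 4_000.0     # LIGO arm length, 4 km
    strain = 1e-21             # characteristic strain of a strong event
    delta_L = strain * arm_length_m
    print(f"arm length change ~ {delta_L:.0e} meters")   # ~4e-18 m, hundreds of times
                                                         # smaller than a proton's diameter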

To date, the collaboration has detected dozens of merger events since its first Nobel Prize-winning discovery. The earliest detected mergers involved either two black holes or two neutron stars. In 2021, LIGO/Virgo/KAGRA confirmed the detection of two separate “mixed” mergers between black holes and neutron stars.

A tour of Virgo. Credit: EGO-Virgo

LIGO/Virgo/KAGRA started its fourth observing run in 2023, and by the following year had announced the detection of a signal indicating a merger between two compact objects, one of which was most likely a neutron star. The other had an intermediate mass—heavier than a neutron star and lighter than a black hole. It was the first gravitational-wave detection of a mass-gap object paired with a neutron star and hinted that the mass gap might be less empty than astronomers previously thought.

Merger of two massive black holes is one for the record books Read More »

microsoft-lays-out-its-path-to-useful-quantum-computing

Microsoft lays out its path to useful quantum computing


Its platform needs error correction that works with different hardware.

Some of the optical hardware needed to make Atom Computing’s machines work. Credit: Atom Computing

On Thursday, Microsoft’s Azure Quantum group announced that it has settled on a plan for getting error correction on quantum computers. While the company pursues its own hardware efforts, the Azure team is a platform provider that currently gives access to several distinct types of hardware qubits. So it has chosen a scheme that is suitable for several different quantum computing technologies (notably excluding its own). The company estimates that the system it has settled on can take hardware qubits with an error rate of about 1 in 1,000 and use them to build logical qubits where errors are instead 1 in 1 million.

While it’s describing the scheme in terms of mathematical proofs and simulations, it hasn’t shown that it works using actual hardware yet. But one of its partners, Atom Computing, is accompanying the announcement with a description of how its machine is capable of performing all the operations that will be needed.

Arbitrary connections

There are similarities and differences between what the company is talking about today and IBM’s recent update of its roadmap, which described another path to error-resistant quantum computing. In IBM’s case, it makes both the software stack that will perform the error correction and the hardware needed to implement it. It uses chip-based hardware, with the connections among qubits mediated by wiring that’s laid out when the chip is fabricated. Since error correction schemes require a very specific layout of connections among qubits, once IBM decides on a quantum error correction scheme, it can design chips with the wiring needed to implement that scheme.

Microsoft’s Azure, in contrast, provides its users with access to hardware from several different quantum computing companies, each based on different technology. Some of them, like Rigetti and Microsoft’s own planned processor, are similar to IBM’s in that they have a fixed layout during manufacturing, and so can only handle codes that are compatible with their wiring layout. But others, such as those provided by Quantinuum and Atom Computing, store their qubits in atoms that can be moved around and connected in arbitrary ways. Those arbitrary connections allow very different types of error correction schemes to be considered.

It can be helpful to think of this using an analogy to geometry. A chip is like a plane, where it’s easiest to form the connections needed for error correction among neighboring qubits; longer connections are possible, but not as easy. Things like trapped ions and atoms provide a higher-dimensional system where far more complicated patterns of connections are possible. (Again, this is an analogy. IBM is using three-dimensional wiring in its processing chips, while Atom Computing stores all its atoms in a single plane.)

Microsoft’s announcement is focused on the sorts of processors that can form the more complicated, arbitrary connections. And, well, it’s taking full advantage of that, building an error correction system with connections that form a four-dimensional hypercube. “We really have focused on the four-dimensional codes due to their amenability to current and near term hardware designs,” Microsoft’s Krysta Svore told Ars.

The code not only describes the layout of the qubits and their connections, but also the purpose of each hardware qubit. Some of them are used to hang on to the value of the logical qubit(s) stored in a single block of code. Others are used for what are called “weak measurements.” These measurements tell us something about the state of the ones that are holding on to the data—not enough to know their values (a measurement that would end the entanglement), but enough to tell if something has changed. The details of the measurement allow corrections to be made that restore the original value.
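A classical analogy for those “weak measurements” is a parity check: you learn whether two data bits still agree without learning what either bit is. The toy below is a three-bit repetition code, vastly simpler than Microsoft’s 4D scheme, but it shows how syndrome bits flag and localize an error without revealing the encoded value.

    # Toy 3-bit repetition code. Parity checks (the classical stand-in for weak
    # measurements) reveal which bit flipped without revealing the stored value.

    def syndrome(bits):
        return (bits[0] ^ bits[1], bits[1] ^ bits[2])   # two parity checks

    def correct(bits):
        flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
        if flip is not None:        # the syndrome pattern points at the flipped bit
            bits[flip] ^= 1
        return bits

    encoded = [1, 1, 1]        # logical "1" spread across three physical bits
    encoded[2] ^= 1            # a single bit-flip error
    print(syndrome(encoded))   # (0, 1): the checks localize the error...
    print(correct(encoded))    # ...and correction restores [1, 1, 1]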

Microsoft’s error correction system is described in a preprint that the company recently released. It includes a family of related geometries, each of which provides different degrees of error correction, based on how many simultaneous errors they can identify and fix. The descriptions are about what you’d expect for complicated math and geometry—”Given a lattice Λ with an HNF L, the code subspace of the 4D geometric code CΛ is spanned by the second homology H2(T4Λ,F2) of the 4-torus T4Λ—but the gist is that all of them convert collections of physical qubits into six logical qubits that can be error corrected.

The more hardware qubits you add to host those six logical qubits, the greater error protection each of them gets. That becomes important because some more sophisticated algorithms will need more than the one-in-a-million error protection that Svore said Microsoft’s favored version will provide. That favorite is what’s called the Hadamard version, which bundles 96 hardware qubits to form six logical qubits, and has a distance of eight (distance being a measure of how many simultaneous errors it can tolerate). You can compare that with IBM’s announcement, which used 144 hardware qubits to host 12 logical qubits at a distance of 12 (so, more hardware, but more logical qubits and greater error resistance).

The other good stuff

On its own, a description of the geometry is not especially exciting. But Microsoft argues that this family of error correction codes has a couple of significant advantages. “All of these codes in this family are what we call single shot,” Svore said. “And that means that, with a very low constant number of rounds of getting information about the noise, one can decode and correct the errors. This is not true of all codes.”

Limiting the number of measurements needed to detect errors is important. For starters, measurements themselves can create errors, so making fewer makes the system more robust. In addition, in things like neutral atom computers, the atoms have to be moved to specific locations where measurements take place, and the measurements heat them up so that they can’t be reused until cooled. So, limiting the measurements needed can be very important for the performance of the hardware.

The second advantage of this scheme, as described in the draft paper, is the fact that you can perform all the operations needed for quantum computing on the logical qubits these schemes host. Just like in regular computers, all the complicated calculations performed on a quantum computer are built up from a small number of simple logical operations. But not every possible logical operation works well with any given error correction scheme. So it can be non-trivial to show that an error correction scheme is compatible with enough of the small operations to enable universal quantum computation.

So, the paper describes how some logical operations can be performed relatively easily, while a few others require manipulations of the error correction scheme in order to work. (These manipulations have names like lattice surgery and magic state distillation, which are good signs that the field doesn’t take itself that seriously.)

So, in sum, Microsoft feels that it has identified an error correction scheme that is fairly compact, can be implemented efficiently on hardware that stores qubits in photons, atoms, or trapped ions, and enables universal computation. What it hasn’t done, however, is show that it actually works. And that’s because it simply doesn’t have the hardware right now. Azure is offering trapped-ion machines from IonQ and Quantinuum, but these top out at 56 qubits—well below the 96 needed for the favored version of these 4D codes. The largest machine it has access to is a 100-qubit system from a company called PASQAL, which barely fits the 96 qubits needed, leaving no room for error.

While it should be possible to test smaller versions of codes in the same family, the Azure team has already demonstrated its ability to work with error correction codes based on hypercubes, so it’s unclear whether there’s anything to gain from that approach.

More atoms

Instead, it appears to be waiting for another partner, Atom Computing, to field its next-generation machine, one it’s designing in partnership with Microsoft. “This first generation that we are building together between Atom Computing and Microsoft will include state-of-the-art quantum capabilities, will have 1,200 physical qubits,” Svore said. “And then the next upgrade of that machine will have upwards of 10,000. And so you’re looking at then being able to go to upwards of a hundred logical qubits with deeper and more reliable computation available.”

So, today’s announcement was accompanied by an update on progress from Atom Computing, focusing on a process called “midcircuit measurement.” Normally, during quantum computing algorithms, you have to resist performing any measurements of the value of qubits until the entire calculation is complete. That’s because quantum calculations depend on things like entanglement and each qubit being in a superposition between its two values; measurements can cause all that to collapse, producing definitive values and ending entanglement.

Quantum error correction schemes, however, require that some of the hardware qubits undergo weak measurements multiple times while the computation is in progress. Those are quantum measurements taking place in the middle of a computation—midcircuit measurements, in other words. To show that its hardware will be up to the task that Microsoft expects of it, the company decided to demonstrate mid-circuit measurements on qubits implementing a simple error correction code.

The process highlights a couple of features that are specific to doing this with neutral atoms. To begin with, the atoms being used for error correction have to be moved to a location—the measurement zone—where they can be measured without disturbing anything else. Then, the measurement typically heats up the atom slightly, meaning it has to be cooled back down afterward. Neither of these processes is perfect, and so sometimes an atom gets lost and needs to be replaced with one from a reservoir of spares. Finally, the atom’s value needs to be reset, and it has to be sent back to its place in the logical qubit.

Testing revealed that about 1 percent of the atoms get lost each cycle, but the system successfully replaces them. In fact, they set up a system where the entire collection of atoms is imaged during the measurement cycle, and any atom that goes missing is identified by an automated system and replaced.
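Laid out as code, one round of that cycle looks something like the sketch below. This is a schematic of the steps described above with placeholder logic (the random syndrome bit and the specific loss rate applied per readout are illustrative), not Atom Computing’s actual control software.

    # Schematic simulation of one mid-circuit measurement round on a
    # neutral-atom machine, following the steps described in the article.
    import random

    LOSS_PER_CYCLE = 0.01   # roughly 1 percent of atoms lost per cycle, per the article

    def measurement_round(ancillas, reservoir):
        syndrome_bits = []
        for i, atom in enumerate(ancillas):
            # Move the atom to the measurement zone, away from the data atoms,
            # and read it out (the readout heats it and occasionally loses it).
            syndrome_bits.append(random.randint(0, 1))   # stand-in for a real outcome
            if random.random() < LOSS_PER_CYCLE:         # atom lost during readout
                atom = reservoir.pop()                   # pull a spare from the reservoir
            # Re-cool the atom, reset its state, and move it back into the logical qubit.
            ancillas[i] = atom
        return syndrome_bits

    spares = list(range(1000, 1100))    # reservoir of spare atoms
    ancillas = list(range(48))          # ancilla atoms belonging to one logical qubit
    print(measurement_round(ancillas, spares))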

Overall, without all these systems in place, the fidelity of a qubit is about 98 percent in this hardware. With error correction turned on, even this simple logical qubit saw its fidelity rise to over 99.5 percent. All of which suggests that Atom Computing’s next machine should be up to some significant tests of Microsoft’s error correction scheme.
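The gap between 98 percent and 99.5 percent matters more than it might look, because fidelities compound over repeated rounds. A quick comparison, treating each round as independent (a simplification):

    # How per-round fidelity compounds over many rounds (independence assumed).
    for rounds in (10, 25, 50):
        print(f"{rounds} rounds: bare qubit ~{0.98 ** rounds:.2f}, "
              f"error-corrected logical qubit ~{0.995 ** rounds:.2f}")

After 50 rounds, the bare qubit would be right only about a third of the time, while the error-corrected one would still be right nearly 80 percent of the time.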

Waiting for the lasers

The key questions are when that machine will arrive and when its successor, which should be capable of performing some real calculations, will follow. Those are challenging questions to answer because, more so than some other quantum computing technologies, neutral-atom computing depends on something that’s not made by the people who build the computers: lasers. Everything about this system—holding atoms in place, moving them around, measuring, performing manipulations—is done with lasers. The lower the noise of the laser (in terms of things like frequency drift and energy fluctuations), the better the system will perform.

So, while Atom can explain its needs to its suppliers and work with them to get things done, it has less control over its fate than some other companies in this space.

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Microsoft lays out its path to useful quantum computing Read More »

ibm-now-describing-its-first-error-resistant-quantum-compute-system

IBM now describing its first error-resistant quantum compute system


Company is moving past focus on qubits, shifting to functional compute units.

A rendering of what IBM expects will be needed to house a Starling quantum computer. Credit: IBM

On Tuesday, IBM released its plans for building a system that should push quantum computing into entirely new territory: one that can perform useful calculations while catching and fixing errors, yet is utterly impossible to model using classical computing methods. The hardware, which will be called Starling, is expected to be able to perform 100 million operations without error on a collection of 200 logical qubits. And the company expects to have it available for use in 2029.

Perhaps just as significant, IBM is also committing to a detailed description of the intermediate steps to Starling. These include a number of processors that will be configured to host a collection of error-corrected qubits, essentially forming a functional compute unit. This marks a major transition for the company, as it involves moving away from talking about collections of individual hardware qubits and focusing instead on units of functional computational hardware. If all goes well, it should be possible to build Starling by chaining a sufficient number of these compute units together.

“We’re updating [our roadmap] now with a series of deliverables that are very precise,” IBM VP Jay Gambetta told Ars, “because we feel that we’ve now answered basically all the science questions associated with error correction and it’s becoming more of a path towards an engineering problem.”

New architectures

Error correction on quantum hardware involves entangling a group of qubits in a way that distributes one or more quantum bit values among them and includes additional qubits that can be used to check the state of the system. It can be helpful to think of these as data and measurement qubits. Performing weak quantum measurements on the measurement qubits produces what’s called “syndrome data,” which can be interpreted to determine whether anything about the data qubits has changed (indicating an error) and how to correct it.

There are lots of potential ways to arrange different combinations of data and measurement qubits for this to work, each referred to as a code. But, as a general rule, the more hardware qubits committed to the code, the more robust it will be to errors, and the more logical qubits that can be distributed among its hardware qubits.

Some quantum hardware, like that based on trapped ions or neutral atoms, is relatively flexible when it comes to hosting error-correction codes. The hardware qubits can be moved around so that any two can be entangled, so it’s possible to adopt a huge range of configurations, albeit at the cost of the time spent moving atoms around. IBM’s technology is quite different. It relies on qubits made of superconducting electronics laid out on a chip, with entanglement mediated by wiring that runs between qubits. The layout of this wiring is set during the chip’s manufacture, and so the chip’s design commits it to a limited number of potential error-correction codes.

Unfortunately, this wiring can also enable crosstalk between neighboring qubits, causing them to lose their state. To avoid this, existing IBM processors have their qubits wired in what they term a “heavy hex” configuration, named for its hexagonal arrangements of connections among its qubits. This has worked well to keep the error rate of its hardware down, but it also poses a challenge, since IBM has decided to go with an error-correction code that’s incompatible with the heavy hex geometry.

A couple of years back, an IBM team described a compact error correction scheme called a low-density parity check (LDPC) code. This requires a square grid of nearest-neighbor connections among its qubits, as well as wiring to connect qubits that are relatively distant on the chip. To get its chips and error-correction scheme in sync, IBM has made two key advances. The first is in its chip packaging, which now uses several layers of wiring sitting above the hardware qubits to enable all of the connections needed for the LDPC code.

We’ll see that first in a processor called Loon that’s on the company’s developmental roadmap. “We’ve already demonstrated these three things: high connectivity, long-range couplers, and couplers that break the plane [of the chip] and connect to other qubits,” Gambetta said. “We have to combine them all as a single demonstration showing that all these parts of packaging can be done, and that’s what I want to achieve with Loon.” Loon will be made public later this year.

On the left, the simple layout of the connections in a current-generation Heron processor. At right, the complicated web of connections that will be present in Loon. Credit: IBM

The second advance IBM has made is to eliminate the crosstalk that the heavy hex geometry was used to minimize, so heavy hex will be going away. “We are releasing this year a bird for near-term experiments that is a square array that has almost zero crosstalk,” Gambetta said, “and that is Nighthawk.” The more densely connected qubits cut the overhead needed to perform calculations by a factor of 15, Gambetta told Ars.

Nighthawk is a 2025 release on a parallel roadmap that you can think of as user-facing. Iterations on its basic design will be released annually through 2028, each enabling more operations without error (going from 5,000 gate operations this year to 15,000 in 2028). Each individual Nighthawk processor will host 120 hardware qubits, but 2026 will see three of them chained together and operating as a unit, providing 360 hardware qubits. That will be followed in 2027 by a machine with nine linked Nighthawk processors, boosting the hardware qubit number over 1,000.

Riding the bicycle

The real future of IBM’s hardware, however, will be happening over on the developmental line of processors, where talk about hardware qubit counts will become increasingly irrelevant. In a technical document released today, IBM is describing the specific LDPC code it will be using, termed a bivariate bicycle code due to some cylindrical symmetries in its details that vaguely resemble bicycle wheels. The details of the connections matter less than the overall picture of what it takes to use this error code in practice.

IBM describes two implementations of this form of LDPC code. In the first, 144 hardware qubits are arranged so that they play host to 12 logical qubits and all of the measurement qubits needed to perform error checks. The standard measure of a code’s ability to catch and correct errors is called its distance, and in this case, the distance is 12. As an alternative, they also describe a code that uses 288 hardware qubits to host the same 12 logical qubits but boost the distance to 18, meaning it’s more resistant to errors. IBM will make one of these collections of logical qubits available as a Kookaburra processor in 2026, which will use them to enable stable quantum memory.
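In the usual [[n, k, d]] shorthand for quantum codes, those two options are roughly [[144, 12, 12]] and [[288, 12, 18]]. A distance-d code can correct up to floor((d − 1) / 2) simultaneous errors, so the trade-off works out like this (simple arithmetic on the numbers IBM has stated, nothing more):

    # Comparing the two bivariate bicycle code options described by IBM.
    codes = {"[[144, 12, 12]]": (144, 12, 12), "[[288, 12, 18]]": (288, 12, 18)}
    for name, (n, k, d) in codes.items():
        print(name, f"-> {n // k} hardware qubits per logical qubit,",
              f"corrects up to {(d - 1) // 2} simultaneous errors")

Doubling the hardware overhead, from 12 to 24 physical qubits per logical qubit, buys the ability to tolerate eight simultaneous errors instead of five.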

The follow-on will bundle these with a handful of additional qubits that can produce quantum states that are needed for some operations. Those, plus hardware needed for the quantum memory, form a single, functional computation unit, built on a single chip, that is capable of performing all the operations needed to implement any quantum algorithm.

That will appear with the Cockatoo chip, which will also enable multiple processing units to be linked on a single bus, allowing the logical qubit count to grow beyond 12. (The company says that one of the dozen logical qubits in each unit will be used to mediate entanglement with other units and so won’t be available for computation.) That will be followed by the first test versions of Starling, which will allow universal computations on a limited number of logical qubits spread across multiple chips.

Separately, IBM is releasing a document that describes a key component of the system that will run on classical computing hardware. Full error correction requires evaluating the syndrome data derived from the state of all the measurement qubits in order to determine the state of the logical qubits and whether any corrections need to be made. As the complexity of the logical qubits grows, the computational burden of that evaluation grows with it. If the evaluation can’t be executed in real time, then it becomes impossible to perform error-corrected calculations.

To address this, IBM has developed a message-passing decoder that can perform parallel evaluations of the syndrome data. The system explores more of the solution space through a combination of randomizing the weight given to the memory of past solutions and handing any seemingly non-optimal solutions off to new instances for additional evaluation. The key thing is that IBM estimates this can be run in real time using FPGAs, ensuring that the system works.

A quantum architecture

There are a lot more details beyond those, as well. Gambetta described the linkage between each computational unit—IBM is calling it a Universal Bridge—which requires one microwave cable for each code distance of the logical qubits being linked. (In other words, a distance 12 code would need 12 microwave-carrying cables to connect each chip.) He also said that IBM is developing control hardware that can operate inside the refrigeration hardware, based on what they’re calling “cold CMOS,” which is capable of functioning at 4 Kelvin.

The company is also releasing renderings of what it expects Starling to look like: a series of dilution refrigerators, all connected by a single pipe that contains the Universal Bridge. “It’s an architecture now,” Gambetta said. “I have never put details in the roadmap that I didn’t feel we could hit, and now we’re putting a lot more details.”

The striking thing to me about this is that it marks a shift away from a focus on individual qubits, their connectivity, and their error rates. The hardware error rates are now good enough (4 x 10^-4) for this to work, although Gambetta felt that a few more improvements should be expected. And connectivity will now be directed exclusively toward creating a functional computational unit.

That said, there’s still a lot of space beyond Starling on IBM’s roadmap. The 200 logical qubits it promises will be enough to handle some problems, but not enough to perform the complex algorithms needed to do things like break encryption. That will need to wait for something closer to Blue Jay, a 2033 system that IBM expects will have 2,000 logical qubits. And, as of right now, it’s the only thing listed beyond Starling.

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

IBM now describing its first error-resistant quantum compute system Read More »

startup-puts-a-logical-qubit-in-a-single-piece-of-hardware

Startup puts a logical qubit in a single piece of hardware

A bit over a year ago, Nord Quantique used a similar setup to show that it could be used to identify the most common form of error in these devices, one in which the system loses one of its photons. “We can store multiple microwave photons into each of these cavities, and the fact that we have redundancy in the system comes exactly from this,” said Nord Quantique’s CTO, Julien Camirand Lemyre. However, this system was unable to handle many of the less common errors that might also occur.

This time around, the company is showing that it can get an actual logical qubit into a variant of the same hardware. In the earlier version of its equipment, the resonator cavity had a single post and supported a single frequency. In the newer iteration, there were two posts and two frequencies. Each of those frequencies creates its own quantum resonator in the same cavity, with its own set of modes. “It’s this ensemble of photons inside this cavity that creates the logical qubit,” Lemyre told Ars.

The additional quantum information that can now be stored in the system enables it to identify more complex errors than the loss of a photon.

Catching, but not fixing errors

The company did two experiments with this new hardware. First, it ran multiple rounds of error detection on data stored in the logical qubit, essentially testing its ability to act like a quantum memory and retain the information stored there. Without correcting errors, the system rapidly decayed, with an error probability in each round of measurement of about 12 percent. By the time the system reached the 25th measurement, almost every instance had already encountered an error.
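That rapid decay follows directly from compounding a 12 percent per-round error probability, assuming the rounds are independent (a simplification):

    # Probability that an instance survives N rounds error-free at 12% error per round.
    error_per_round = 0.12
    for rounds in (5, 10, 25):
        survival = (1 - error_per_round) ** rounds
        print(f"{rounds} rounds: ~{survival:.0%} of instances still error-free")

By round 25, only around 5 percent of instances would be expected to remain error-free, consistent with what the company observed.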

The second time through, the company repeated the process, discarding any instances in which an error occurred. In almost every instance, that meant the results were discarded long before they got through two dozen rounds of measurement. But at these later stages, none of the remaining instances were in an erroneous state. That indicates that a successful correction of the errors—something the team didn’t try—would be able to fix all the detected problems.

Startup puts a logical qubit in a single piece of hardware Read More »