Physics

The physics of ugly Christmas sweaters

In 2018, a team of French physicists developed a rudimentary mathematical model to describe the deformation of a common type of knit. Their work was inspired when co-author Frédéric Lechenault watched his pregnant wife knitting baby booties and blankets, and he noted how the items would return to their original shape even after being stretched. With a few colleagues, he was able to boil the mechanics down to a few simple equations, adaptable to different stitch patterns. It all comes down to three factors: the “bendiness” of the yarn, the length of the yarn, and how many crossing points are in each stitch.

A simpler stitch

A simplified model of how yarns interact Credit: J. Crassous/University of Rennes

One of the co-authors of that 2018 paper, Samuel Poincloux of Aoyama Gakuin University in Japan, also co-authored this latest study with two other colleagues, Jérôme Crassous (University of Rennes in France) and Audrey Steinberger (University of Lyon). This time around, Poincloux was interested in the knotty problem of predicting the resting shape of a knitted fabric, given the length of yarn in each stitch—an open question dating back at least to a 1959 paper.

The complex geometry of all the friction-producing contact zones between the slender elastic fibers makes such a system too difficult to model precisely, because the contact zones can rotate or change shape as the fabric moves. Poincloux and his colleagues therefore came up with their own simplified model.

The team performed experiments with a Jersey stitch knit (aka a stockinette), a widely used and simple knit consisting of a single yarn (in this case, a nylon thread) forming interlocked loops. They also ran numerical simulations that modeled the yarn as discrete elastic rods, coupled by dry contacts with a specific friction coefficient, to form the mesh.

The results: Even when there were no external stresses applied to the fabric, the friction between the threads served as a stabilizing factor. And there was no single form of equilibrium for a knitted sweater’s resting shape; rather, there were multiple metastable states that were dependent on the fabric’s history—the different ways it had been folded, stretched, or rumpled. In short, “Knitted fabrics do not have a unique shape when no forces are applied, contrary to the relatively common belief in textile literature,” said Crassous.

DOI: Physical Review Letters, 2024. 10.1103/PhysRevLett.133.248201 (About DOIs).

Could microwaved grapes be used for quantum sensing?

The microwaved grape trick also shows their promise as alternative microwave resonators for quantum sensing applications, according to the authors of this latest paper. Those applications include satellite technology, masers, microwave photon detection, hunting for axions (a dark matter candidate), and driving spins in various quantum systems, such as superconducting qubits for quantum computing.

Prior research had specifically investigated the electrical fields behind the plasma effect. “We showed that grape pairs can also enhance magnetic fields which are crucial for quantum sensing applications,” said co-author Ali Fawaz, a graduate student at Macquarie University.

Fawaz and co-authors used specially fabricated nanodiamonds for their experiments. Unlike pure diamonds, which are colorless, the nanodiamonds had some of their carbon atoms replaced, creating defect centers that act like tiny magnets and make the diamonds ideal for quantum sensing. Sapphire is typically used for this purpose, but Fawaz et al. realized that water conducts microwave energy better than sapphire does—and grapes are mostly water.

So the team placed a nanodiamond atop a thin glass fiber and positioned it between two grapes. They then shone green laser light through the fiber, making the defect centers glow red. Measuring the brightness told them the strength of the magnetic field around the grapes, which turned out to be twice as strong with the grapes as without.

The size and shape of the grapes used in the experiments proved crucial: they must be about 27 millimeters long to concentrate the microwave energy at just the right frequency for the quantum sensor. The biggest catch is that the grape-based setup proved less stable and lost more energy than conventional resonators. Future research may identify more reliable materials that achieve a similar effect.

DOI: Physical Review Applied, 2024. 10.1103/PhysRevApplied.22.064078 (About DOIs).

Google gets an error-corrected quantum bit to be stable for an hour


Using almost the entire chip for a logical qubit provides long-term stability.

Google’s new Willow chip is its first new generation of chips in about five years. Credit: Google

On Monday, Nature released a paper from Google’s quantum computing team that provides a key demonstration of the potential of quantum error correction. Thanks to an improved processor, Google’s team found that increasing the number of hardware qubits dedicated to an error-corrected logical qubit led to an exponential increase in performance. By the time the entire 105-qubit processor was dedicated to hosting a single error-corrected qubit, the system was stable for an average of an hour.

In fact, Google told Ars that errors on this single logical qubit were rare enough that it was difficult to study them. The work provides a significant validation that quantum error correction is likely to be capable of supporting the execution of complex algorithms that might require hours to execute.

A new fab

Google is making a number of announcements in association with the paper’s release (an earlier version of the paper has been up on the arXiv since August). One of those is that the company is committed enough to its quantum computing efforts that it has built its own fabrication facility for its superconducting processors.

“In the past, all the Sycamore devices that you’ve heard about were fabricated in a shared university clean room space next to graduate students and people doing kinds of crazy stuff,” Google’s Julian Kelly said. “And we’ve made this really significant investment in bringing this new facility online, hiring staff, filling it with tools, transferring their process over. And that enables us to have significantly more process control and dedicated tooling.”

That’s likely to be a critical step for the company, as the ability to fabricate smaller test devices can allow the exploration of lots of ideas on how to structure the hardware to limit the impact of noise. The first publicly announced product of this lab is the Willow processor, Google’s second design, which ups its qubit count to 105. Kelly said one of the changes that came with Willow actually involved making the individual pieces of the qubit larger, which makes them somewhat less susceptible to the influence of noise.

All of that led to a lower error rate, which was critical for the work done in the new paper. This was demonstrated by running Google’s favorite benchmark, one that it acknowledges is contrived in a way to make quantum computing look as good as possible. Still, people have figured out how to make algorithm improvements for classical computers that have kept them mostly competitive. But, with all the improvements, Google expects that the quantum hardware has moved firmly into the lead. “We think that the classical side will never outperform quantum in this benchmark because we’re now looking at something on our new chip that takes under five minutes [but] would take 10²⁵ years, which is way longer than the age of the Universe,” Kelly said.

Building logical qubits

The work focuses on the behavior of logical qubits, in which a collection of individual hardware qubits are grouped together in a way that enables errors to be detected and corrected. These are going to be essential for running any complex algorithms, since the hardware itself experiences errors often enough to make some inevitable during any complex calculations.

This naturally creates a key milestone. You can get better error correction by adding more hardware qubits to each logical qubit. If each of those hardware qubits produces errors at a sufficient rate, however, then you’ll experience errors faster than you can correct for them. You need to get hardware qubits of a sufficient quality before you start benefitting from larger logical qubits. Google’s earlier hardware had made it past that milestone, but only barely. Adding more hardware qubits to each logical qubit only made for a marginal improvement.

That’s no longer the case. Google’s processors have the hardware qubits laid out on a square grid, with each connected to its nearest neighbors (typically four except at the edges of the grid). And there’s a specific error correction code structure, called the surface code, that fits neatly into this grid. And you can use surface codes of different sizes by using progressively more of the grid. The size of the grid being used is measured by a term called distance, with larger distance meaning a bigger logical qubit, and thus better error correction.

(In addition to a standard surface code, Google includes a few qubits that handle a phenomenon called “leakage,” where a qubit ends up in a higher-energy state, instead of the two low-energy states defined as zero and one.)

The key result is that going from a distance of three to a distance of five more than doubled the ability of the system to catch and correct errors. Going from a distance of five to a distance of seven doubled it again. Which shows that the hardware qubits have reached a sufficient quality that putting more of them into a logical qubit has an exponential effect.

“As we increase the grid from three by three to five by five to seven by seven, the error rate is going down by a factor of two each time,” said Google’s Michael Newman. “And that’s that exponential error suppression that we want.”
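Newman’s numbers describe a simple exponential scaling: each two-step increase in code distance halves the logical error rate. A minimal sketch of that arithmetic follows; the base error rate and the suppression factor are illustrative assumptions, not Google’s measured figures.

```python
# Illustrative model of exponential error suppression in a surface code.
# LAMBDA (the per-step suppression factor) and BASE_ERROR are assumed
# values chosen to mirror the quoted "factor of two" behavior.

LAMBDA = 2.0        # suppression factor per distance step (assumed)
BASE_ERROR = 3e-3   # logical error rate at distance 3 (assumed)

def logical_error_rate(distance):
    """Logical error rate at an odd code distance (3, 5, 7, ...)."""
    steps = (distance - 3) // 2
    return BASE_ERROR / LAMBDA**steps

for d in (3, 5, 7):
    print(f"distance {d}: error rate {logical_error_rate(d):.2e}")
```

Each step from a 3×3 to a 5×5 to a 7×7 grid divides the rate by the same factor, which is the exponential suppression the quote describes.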

Going big

The second thing they demonstrated is that, if you make the largest logical qubit that the hardware can support, with a distance of 15, it’s possible to hang onto the quantum information for an average of an hour. This is striking because Google’s earlier work had found that its processors experience widespread simultaneous errors that the team ascribed to cosmic ray impacts. (IBM, however, has indicated it doesn’t see anything similar, so it’s not clear whether this diagnosis is correct.) Those happened every 10 seconds or so. But this work shows that a sufficiently large error code can correct for these events, whatever their cause.

That said, these logical qubits don’t survive indefinitely; two types of rare events can still knock them out. One of them seems to be a localized, temporary increase in errors. The second, more difficult-to-deal-with problem involves a widespread spike in error detection affecting an area that includes roughly 30 qubits. At this point, however, Google has only seen six of these events, so the team told Ars that it’s difficult to really characterize them. “It’s so rare it actually starts to become a bit challenging to study because you have to gain a lot of statistics to even see those events at all,” said Kelly.

Beyond the relative durability of these logical qubits, the paper notes another advantage to going with larger code distances: it enhances the impact of further hardware improvements. Google estimates that at a distance of 15, improving hardware performance by a factor of two would drop errors in the logical qubit by a factor of 250. At a distance of 27, the same hardware improvement would lead to an improvement of over 10,000 in the logical qubit’s performance.
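Those estimates are consistent with the standard surface-code scaling, in which the logical error rate falls as the hardware error rate raised to the power (d+1)/2. A quick sketch of the arithmetic, where the exponent form is an assumption based on that textbook scaling rather than something taken from the paper:

```python
# How a 2x hardware improvement compounds at larger code distances,
# assuming the textbook surface-code scaling: logical error rate
# proportional to (hardware error rate)^((d+1)/2).

def logical_gain(hardware_gain, distance):
    """Factor by which the logical error rate improves."""
    return hardware_gain ** ((distance + 1) // 2)

print(logical_gain(2, 15))  # 256, close to the ~250 quoted
print(logical_gain(2, 27))  # 16384, i.e. "over 10,000"
```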

Note that none of this will ever get the error rate to zero. Instead, we just need to get the error rate to a level where an error is unlikely for a given calculation (more complex calculations will require a lower error rate). “It’s worth understanding that there’s always going to be some type of error floor and you just have to push it low enough to the point where it practically is irrelevant,” Kelly said. “So for example, we could get hit by an asteroid and the entire Earth could explode and that would be a correlated error that our quantum computer is not currently built to be robust to.”

Obviously, a lot of additional work will need to be done to both make logical qubits like this survive for even longer, and to ensure we have the hardware to host enough logical qubits to perform calculations. But the exponential improvements here, to Google, suggest that there’s nothing obvious standing in the way of that. “We woke up one morning and we kind of got these results and we were like, wow, this is going to work,” Newman said. “This is really it.”

Nature, 2024. DOI: 10.1038/s41586-024-08449-y  (About DOIs).

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Cheerios effect inspires novel robot design

There’s a common popular science demonstration involving “soap boats,” in which liquid soap poured onto the surface of water creates a propulsive flow driven by gradients in surface tension. But it doesn’t last very long since the soapy surfactants rapidly saturate the water surface, eliminating that surface tension. Using ethanol to create similar “cocktail boats” can significantly extend the effect because the alcohol evaporates rather than saturating the water.

That simple classroom demonstration could also be used to propel tiny robotic devices across liquid surfaces to carry out various environmental or industrial tasks, according to a preprint posted to the physics arXiv. The authors also exploited the so-called “Cheerios effect” as a means of self-assembly to create clusters of tiny ethanol-powered robots.

As previously reported, those who love their Cheerios for breakfast are well acquainted with how those last few tasty little “O”s tend to clump together in the bowl: either drifting to the center or to the outer edges. The “Cheerios effect” is found throughout nature, such as in grains of pollen (or, alternatively, mosquito eggs or beetles) floating on top of a pond; small coins floating in a bowl of water; or fire ants clumping together to form life-saving rafts during floods. A 2005 paper in the American Journal of Physics outlined the underlying physics, identifying the culprit as a combination of buoyancy, surface tension, and the so-called “meniscus effect.”

It all adds up to a type of capillary action. Basically, the mass of the Cheerios is insufficient to break the milk’s surface tension. But it’s enough to put a tiny dent in the surface of the milk in the bowl, such that if two Cheerios are sufficiently close, the curved surface in the liquid (meniscus) will cause them to naturally drift toward each other. The “dents” merge and the “O”s clump together. Add another Cheerio into the mix, and it, too, will follow the curvature in the milk to drift toward its fellow “O”s.
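How close is “sufficiently close”? The reach of each dent is set by the liquid’s capillary length, which works out to a few millimeters. A quick back-of-the-envelope check, using water’s properties as a stand-in for milk:

```python
import math

# Capillary length: the distance over which a meniscus deforms the
# surface, and hence the range of the Cheerios attraction. Values for
# water at room temperature are used as a stand-in for milk.
surface_tension = 0.072  # N/m
density = 1000.0         # kg/m^3
g = 9.81                 # m/s^2

capillary_length = math.sqrt(surface_tension / (density * g))
print(f"{capillary_length * 1000:.2f} mm")  # about 2.7 mm
```

Floating objects farther apart than a few of these lengths barely feel each other; within it, the merging dents pull them together.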

Physicists made the first direct measurements of the various forces at work in the phenomenon in 2019, and they found one extra factor underlying the Cheerios effect: the disks tilted toward each other as they drifted closer in the water. The tilt made the disks push harder against the water’s surface, producing a pushback from the liquid that increased the attraction between the two disks.

Microsoft and Atom Computing combine for quantum error correction demo


New work provides a good view of where the field currently stands.

The first-generation tech demo of Atom’s hardware. Things have progressed considerably since. Credit: Atom Computing

In September, Microsoft made an unusual combination of announcements. It demonstrated progress with quantum error correction, something that will be needed for the technology to move much beyond the interesting demo phase, using hardware from a quantum computing startup called Quantinuum. At the same time, however, the company also announced that it was forming a partnership with a different startup, Atom Computing, which uses a different technology to make qubits available for computations.

Given that, it was probably inevitable that the folks in Redmond, Washington, would want to show that similar error correction techniques would also work with Atom Computing’s hardware. It didn’t take long, as the two companies are releasing a draft manuscript describing their work on error correction today. The paper serves as both a good summary of where things currently stand in the world of error correction and a good look at some of the distinct features of computation using neutral atoms.

Atoms and errors

While we have various technologies that provide a way of storing and manipulating bits of quantum information, none of them can be operated error-free. At present, errors make it difficult to perform even the simplest computations that are clearly beyond the capabilities of classical computers. More sophisticated algorithms would inevitably encounter an error before they could be completed, a situation that would remain true even if we could somehow improve the hardware error rates of qubits by a factor of 1,000—something we’re unlikely to ever be able to do.

The solution to this is to use what are called logical qubits, which distribute quantum information across multiple hardware qubits and allow the detection and correction of errors when they occur. Since multiple qubits get linked together to operate as a single logical unit, the hardware error rate still matters. If it’s too high, then adding more hardware qubits just means that errors will pop up faster than they can possibly be corrected.

We’re now at the point where, for a number of technologies, hardware error rates have passed the break-even point, and adding more hardware qubits can lower the error rate of a logical qubit based on them. This was demonstrated using neutral atom qubits by an academic lab at Harvard University about a year ago. The new manuscript demonstrates that it also works on a commercial machine from Atom Computing.

Neutral atoms, which can be held in place using a lattice of laser light, have a number of distinct advantages when it comes to quantum computing. Every single atom will behave identically, meaning that you don’t have to manage the device-to-device variability that’s inevitable with fabricated electronic qubits. Atoms can also be moved around, allowing any atom to be entangled with any other. This any-to-any connectivity can enable more efficient algorithms and error-correction schemes. The quantum information is typically stored in the spin of the atom’s nucleus, which is shielded from environmental influences by the cloud of electrons that surround it, making them relatively long-lived qubits.

Operations, including gates and readout, are performed using lasers. The way the physics works, the spacing of the atoms determines how the laser affects them. If two atoms are a critical distance apart, the laser can perform a single operation, called a two-qubit gate, that affects both of their states. Anywhere outside this distance, and a laser only affects each atom individually. This allows a fine control over gate operations.

That said, operations are relatively slow compared to some electronic qubits, and atoms can occasionally be lost entirely. The optical traps that hold the atoms in place also depend on each atom being in its ground state; if an atom ends up stuck in a different state, it can drift off and be lost. This is actually somewhat useful, in that it converts an unexpected state into a clear, detectable error.

Atom Computing’s system. Rows of atoms are held far enough apart so that a single laser sent across them (green bar) only operates on individual atoms. If the atoms are moved to the interaction zone (red bar), a laser can perform gates on pairs of atoms. Spaces where atoms can be held can be left empty to avoid performing unneeded operations. Credit: Reichardt, et al.

The machine used in the new demonstration hosts 256 of these neutral atoms. Atom Computing has them arranged in sets of parallel rows, with space in between to let the atoms be shuffled around. For single-qubit gates, it’s possible to shine a laser across the rows, causing every atom it touches to undergo that operation. For two-qubit gates, pairs of atoms get moved to the end of the row and moved a specific distance apart, at which point a laser will cause the gate to be performed on every pair present.

Atom’s hardware also allows a constant supply of new atoms to be brought in to replace any that are lost. It’s also possible to image the atom array in between operations to determine whether any atoms have been lost and if any are in the wrong state.

It’s only logical

As a general rule, the more hardware qubits you dedicate to each logical qubit, the more simultaneous errors you can identify. This identification can enable two ways of handling the error. In the first, you simply discard any calculation with an error and start over. In the second, you can use information about the error to try to fix it, although the repair involves additional operations that can potentially trigger a separate error.

For this work, the Microsoft/Atom team used relatively small logical qubits (meaning each was built from only a few hardware qubits), which meant they could fit more of them within the 256 hardware qubits the machine made available. They also compared the error rates of two approaches: error detection with discard and error detection with correction.

The research team did two main demonstrations. One was placing 24 of these logical qubits into what’s called a cat state, named after Schrödinger’s hypothetical feline. This is when a quantum object simultaneously has a non-zero probability of being in two mutually exclusive states. In this case, the researchers placed 24 logical qubits in an entangled cat state, the largest ensemble of this sort yet created. Separately, they implemented what’s called the Bernstein-Vazirani algorithm. The classical version of this algorithm requires a separate query to identify each bit in a string of them; the quantum version obtains the entire string with a single query, making it a notable case where a quantum speedup is possible.
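The query-count gap is easy to see on the classical side. In this toy sketch (ordinary Python, no quantum resources), recovering a hidden n-bit string from a dot-product oracle takes one query per bit, which is exactly what the quantum algorithm collapses into a single query:

```python
# Classical Bernstein-Vazirani: the oracle computes f(x) = s . x mod 2
# for a hidden bit string s. Classically, recovering s takes n queries
# (one per bit); the quantum algorithm needs just one.

def make_oracle(secret):
    return lambda x: sum(s * b for s, b in zip(secret, x)) % 2

def recover_classically(oracle, n):
    bits = []
    for i in range(n):
        e = [0] * n   # unit vector with a 1 in position i
        e[i] = 1
        bits.append(oracle(e))  # query i reveals bit s_i
    return bits

secret = [1, 0, 1, 1, 0]
oracle = make_oracle(secret)
print(recover_classically(oracle, len(secret)))  # [1, 0, 1, 1, 0]
```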

Both of these showed a similar pattern. When done directly on the hardware, with each qubit being a single atom, there was an appreciable error rate. By detecting errors and discarding those calculations where they occurred, it was possible to significantly improve the error rate of the remaining calculations. Note that this doesn’t eliminate errors, as it’s possible for multiple errors to occur simultaneously, altering the value of the qubit without leaving an indication that can be spotted with these small logical qubits.

Discarding has its limits; as calculations become increasingly complex, involving more qubits or operations, it will inevitably mean every calculation will have an error, so you’d end up wanting to discard everything. Which is why we’ll ultimately need to correct the errors.

In these experiments, however, the process of correcting the error—taking an entirely new atom and setting it into the appropriate state—was also error-prone. So, while it could be done, it ended up having an overall error rate that was intermediate between the approach of catching and discarding errors and the rate when operations were done directly on the hardware.

In the end, the current hardware has an error rate that’s good enough that error correction actually improves the probability that a set of operations can be performed without producing an error. But not good enough that we can perform the sort of complex operations that would lead quantum computers to have an advantage in useful calculations. And that’s not just true for Atom’s hardware; similar things can be said for other error-correction demonstrations done on different machines.

There are two ways to go beyond these current limits. One is simply to improve the error rates of the hardware qubits further, as fewer total errors make it more likely that we can catch and correct them. The second is to increase the qubit counts so that we can host larger, more robust logical qubits. We’re obviously going to need to do both, and Atom’s partnership with Microsoft was formed in the hope that it will help both companies get there faster.

Scientist behind superconductivity claims ousted

University of Rochester physicist Ranga Dias made headlines with his controversial claims of high-temperature superconductivity—and made headlines again when the two papers reporting the breakthroughs were later retracted under suspicion of scientific misconduct, although Dias denied any wrongdoing. The university conducted a formal investigation over the past year and has now terminated Dias’ employment, The Wall Street Journal reported.

“In the past year, the university completed a fair and thorough investigation—conducted by a panel of nationally and internationally known physicists—into data reliability concerns within several retracted papers in which Dias served as a senior and corresponding author,” a spokesperson for the University of Rochester said in a statement to the WSJ, confirming his termination. “The final report concluded that he engaged in research misconduct while a faculty member here.”

The spokesperson declined to elaborate further on the details of his departure, and Dias did not respond to the WSJ’s request for comment. Dias did not have tenure, so the final decision rested with the Board of Trustees after a recommendation from university President Sarah Mangelsdorf. Mangelsdorf had called for terminating his position in an August letter to the chair and vice chair of the Board of Trustees, so the decision should not come as a surprise. Dias’ lawsuit claiming that the investigation was biased was dismissed by a judge in April.

Ars has been following this story ever since Dias first burst onto the scene with reports of a high-pressure, room-temperature superconductor, published in Nature in 2020. Even as that paper was being retracted due to concerns about the validity of some of its data, Dias published a second paper in Nature claiming a similar breakthrough: a superconductor that works at high temperatures but somewhat lower pressures. Shortly afterward, that paper was retracted as well. As Ars Science Editor John Timmer reported previously:

Dias’ lab was focused on high-pressure superconductivity. At extreme pressures, the orbitals where electrons hang out get distorted, which can alter the chemistry and electronic properties of materials. This can mean the formation of chemical compounds that don’t exist at normal pressures, along with distinct conductivity. In a number of cases, these changes enabled superconductivity at unusually high temperatures, although still well below the freezing point of water.

Dias, however, supposedly found a combination of chemicals that would boost the transition to superconductivity to near room temperature, although only at extreme pressures. While the results were plausible, the details regarding how some of the data was processed to produce one of the paper’s key graphs were lacking, and Dias didn’t provide a clear explanation.

The ensuing investigation cleared Dias of misconduct for that first paper. Then came the second paper, which reported another high-temperature superconductor forming at less extreme pressures. However, potential problems soon became apparent, with many of the authors calling for its retraction, although Dias did not.

IBM boosts the amount of computation you can get done on quantum hardware

By making small adjustments to the frequency that the qubits are operating at, it’s possible to avoid these problems. This can be done when the Heron chip is being calibrated before it’s opened for general use.

Separately, the company has done a rewrite of the software that controls the system during operations. “After learning from the community, seeing how to run larger circuits, [we were able to] almost better define what it should be and rewrite the whole stack towards that,” Gambetta said. The result is a dramatic speed-up. “Something that took 122 hours now is down to a couple of hours,” he told Ars.

Since people are paying for time on this hardware, that’s good for customers now. However, it could also pay off in the longer run: some errors occur randomly, so less time spent on a calculation means fewer opportunities for errors to creep in.

Deeper computations

Despite all those improvements, errors are still likely during any significant calculations. While it continues to work toward developing error-corrected qubits, IBM is focusing on what it calls error mitigation, which it first detailed last year. As we described it then:

“The researchers turned to a method where they intentionally amplified and then measured the processor’s noise at different levels. These measurements are used to estimate a function that produces similar output to the actual measurements. That function can then have its noise set to zero to produce an estimate of what the processor would do without any noise at all.”
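The technique described there is zero-noise extrapolation, and it can be sketched in a few lines. The linear noise model and the sample numbers below are illustrative assumptions, not IBM’s actual procedure:

```python
# Toy zero-noise extrapolation: measure an observable at deliberately
# amplified noise levels, fit a simple model, and evaluate it at zero
# noise. A linear model and made-up data are used for illustration.

def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Noise amplification factors (1.0 = the hardware's native noise) and
# the expectation values "measured" at each level (made-up numbers).
noise_levels = [1.0, 2.0, 3.0]
measurements = [0.82, 0.65, 0.48]

intercept, slope = fit_line(noise_levels, measurements)
print(f"zero-noise estimate: {intercept:.2f}")  # 0.99
```

In practice the fitted function is far more complex than a line, which is where the computational cost discussed next comes from.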

The problem here is that using the function is computationally difficult, and the difficulty increases with the qubit count. So, while it’s still easier to do error mitigation calculations than to simulate the quantum computer’s behavior on classical hardware, there’s still a risk of the approach becoming computationally intractable. But IBM has taken the time to optimize that, too. “They’ve got algorithmic improvements, and the method that uses tensor methods [now] uses the GPU,” Gambetta told Ars. “So I think it’s a combination of both.”

What makes baseball’s “magic mud” so special?

“Magic mud” composition and microstructure: (top right) a clean baseball surface; (bottom right) a mudded baseball. Credit: S. Pradeep et al., 2024

Pradeep et al. found that magic mud’s particles are primarily silt and clay, with a bit of sand and organic material. The stickiness comes from the clay, silt, and organic matter, while the sand makes it gritty. So the mud “has the properties of skin cream,” they wrote. “This allows it to be held in the hand like a solid but also spread easily to penetrate pores and make a very thin coating on the baseball.”

When the mud dries on the baseball, however, the residue left behind is not like skin cream. That’s due to the angular sand particles bonded to the baseball by the clay, which can increase surface friction by as much as a factor of two. Meanwhile, the finer particles double the adhesion. “The relative proportions of cohesive particulates, frictional sand, and water conspire to make a material that flows like skin cream but grips like sandpaper,” they wrote.

Despite its relatively mundane components, the magic mud nonetheless shows remarkable mechanical behaviors that the authors think would make it useful in other practical applications. For instance, it might replace synthetic materials as an effective lubricant, provided the gritty sand particles are removed. Or it could be used as a friction agent to improve traction on slippery surfaces, provided one could define the optimal fraction of sand content that wouldn’t diminish its spreadability. Or it might be used as a binding agent in locally sourced geomaterials for construction.

“As for the future of Rubbing Mud in Major League Baseball, unraveling the mystery of its behavior does not and should not necessarily lead to a synthetic replacement,” the authors concluded. “We rather believe the opposite; Rubbing Mud is a nature-based material that is replenished by the tides, and only small quantities are needed for great effect. In a world that is turning toward green solutions, this seemingly antiquated baseball tradition provides a glimpse of a future of Earth-inspired materials science.”

PNAS, 2024. DOI: 10.1073/pnas.241351412 (About DOIs).


for-the-strongest-disc-golf-throws,-it’s-all-in-the-thumbs

For the strongest disc golf throws, it’s all in the thumbs

When Zachary Lindsey, a physicist at Berry College in Georgia, decided to run an experiment on how to get the best speed and torque while playing disc golf (aka Frisbee golf), he had no trouble recruiting 24 eager participants keen on finding science-based tips on how to improve their game. Lindsey and his team determined the optimal thumb distance from the center of the disc to increase launch speed and distance, according to a new paper published in the journal AIP Advances.

Disc golf first emerged in the 1960s, but “Steady” Ed Headrick, inventor of the modern Frisbee, is widely considered the “father” of the sport since it was he who coined and trademarked the name “disc golf” in 1975. He and his son founded their own company to manufacture the equipment used in the game. As of 2023, the Professional Disc Golf Association (PDGA) had over 107,000 registered members worldwide, with players hailing from 40 countries.

A disc golf course typically has either nine or 18 holes, or targets, called “baskets.” There is a tee position for starting play, and players take turns throwing discs until each lands in the basket, similar to how golfers work toward sinking a ball into a hole. The number of throws an experienced player is expected to need to make the basket is considered “par.”

There are essentially three different disc types: drivers, mid-rangers, and putters. Driver discs are thin and sharp-edged, designed to reduce drag for long throws; they’re typically used for teeing off or other long-distance throws since a strong throw can cover as much as 500 feet. Putter discs, as the name implies, are better for playing close to the basket since they are thicker and thus have higher drag when in flight. Mid-range discs have elements of both drivers and putters, designed for distances of 200–300 feet—i.e., approaching the basket—where players want to optimize range and accuracy.


these-3d-printed-pipes-inspired-by-shark-intestines-outperform-tesla-valves

These 3D-printed pipes inspired by shark intestines outperform Tesla valves

“You don’t get to beat Tesla every day” —

Prototypes control fluid flow in a preferred direction with no need for moving parts.

Shark intestines are naturally occurring Tesla valves; scientists have figured out how to mimic their unique structure. Credit: Sarah L. Keller/University of Washington

Scientists at the University of Washington have re-created the distinctive spiral shapes of shark intestines in 3D-printed pipes in order to study the unique fluid flow inside the spirals. Their prototypes kept fluids flowing in one preferred direction with no need for flaps to control that flow and performed significantly better than so-called “Tesla valves,” particularly when made of soft polymers, according to a new paper published in the Proceedings of the National Academy of Sciences.

As we’ve reported previously, in 1920, Serbian-born inventor Nikola Tesla designed and patented what he called a “valvular conduit”: a pipe whose internal design ensures that fluid will flow in one preferred direction, with no need for moving parts, making it ideal for microfluidics applications, among other uses.

In his patent application, Tesla described this series of 11 flow-control segments as being made of “enlargements, recessions, projections, baffles, or buckets which, while offering virtually no resistance to the passage of fluid in one direction, other than surface friction, constitute an almost impassable barrier to its flow in the opposite direction.” And because it achieves this with no moving parts, a Tesla valve is much more resistant to the wear and tear of frequent operation.

Tesla claimed that water would flow through his valve 200 times slower in one direction than in the other, which may have been an exaggeration. A team of scientists at New York University built a working Tesla valve in 2021, following the inventor’s design, and tested that claim by measuring the flow of water through the valve in both directions at various pressures. They found that water flowed only about two times slower in the nonpreferred direction.
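That two-to-one asymmetry can be expressed as a “diodicity” ratio, a common figure of merit for fluidic diodes: the flow rate in the easy direction divided by the flow rate in the hard direction at the same driving pressure. The sketch below uses invented numbers purely for illustration; it is not the NYU team’s data or code.

```python
def diodicity(q_forward: float, q_reverse: float) -> float:
    """Ratio of flow in the preferred direction to flow in the
    nonpreferred direction at matched pressure; 1.0 means no asymmetry."""
    return q_forward / q_reverse

# Hypothetical matched-pressure measurements (mL/s), chosen to mirror the
# roughly two-fold asymmetry the NYU team measured:
ratio = diodicity(0.84, 0.42)  # → 2.0
```

By the same measure, Tesla’s original claim would correspond to a diodicity of about 200.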

Flow rate proved to be a critical factor. The valve offered very little resistance at slow flow rates, but once that rate increased above a certain threshold, the valve’s resistance would increase as well, generating turbulent flows in the reverse direction, thereby “plugging” the pipe with vortices and disruptive currents. So it actually works more like a switch and can also help smooth out pulsing flows, akin to how AC/DC converters turn alternating currents into direct currents. That may even have been Tesla’s original intent in designing the valve, given that his biggest claim to fame is inventing both the AC motor and an AC/DC converter.

It helps to be a shark

Different kinds of sharks have intestines with different spiral patterns that favor fluid flow in one direction. Credit: Ido Levin

The Tesla valve also provides a useful model for how food moves through the digestive system of many species of shark. In 2020, Japanese researchers reconstructed micrographs of histological sections from a species of catshark into a three-dimensional model, offering a tantalizing glimpse of the anatomy of a scroll-type spiral intestine. The following year, scientists took CT scans of shark intestines and concluded that the intestines are naturally occurring Tesla valves.

That’s where the work of UW postdoc Ido Levin and his co-authors comes in. They had questions about the 2021 research in particular. “Flow asymmetry in a pipe with no moving flaps has tremendous technological potential, but the mechanism was puzzling,” said Levin. “It was not clear which parts of the shark’s intestinal structure contributed to the asymmetry and which served only to increase the surface area for nutrient uptake.”

Levin et al. 3D-printed several pipes with an internal helical structure mimicking that of shark intestines, varying certain geometrical parameters like the number of turns or the pitch angle of the helix. It was admittedly an idealized structure, so the team was delighted when the first batch, made from rigid materials, produced the hoped-for flow asymmetry. After further fine-tuning of the parameters, the rigid printed pipes produced flow asymmetries that matched or exceeded Tesla valves.

Eight of the team’s 3D-printed prototypes with various interior helices. Credit: Ido Levin/University of Washington

But the researchers weren’t done yet. “[Prior work] showed that if you connect these intestines in the same direction as a digestive tract, you get a faster flow of fluid than if you connect them the other way around. We thought this was very interesting from a physics perspective,” said Levin last year while presenting preliminary results at the 67th Annual Biophysical Society Meeting. “One of the theorems in physics actually states that if you take a pipe, and you flow fluid very slowly through it, you have the same flow if you invert it. So we were very surprised to see experiments that contradict the theory. But then you remember that the intestines are not made out of steel—they’re made of something soft, so while fluid flows through the pipe, it deforms it.”

That gave Levin et al. the idea to try making their pipes out of soft deformable polymers—the softest commercially available ones that could also be used for 3D printing. That batch of pipes performed seven times better on flow asymmetry than any prior measurements of Tesla valves. And since actual shark intestines are about 100 times softer than the polymers they used, the team thinks they can achieve even better performance, perhaps with hydrogels when they become more widely available as 3D printing continues to evolve. The biggest challenge, per the authors, is finding soft materials that can withstand high deformations.

Finally, because the pipes are three-dimensional, they can accommodate larger fluid volumes, opening up applications in larger commercial devices. “Chemists were already motivated to develop polymers that are simultaneously soft, strong and printable,” said co-author Alshakim Nelson, whose expertise lies in developing new types of polymers. “The potential use of these polymers to control flow in applications ranging from engineering to medicine strengthens that motivation.”

PNAS, 2024. DOI: 10.1073/pnas.2406481121 (About DOIs).


metal-bats-have-pluses-for-young-players,-but-in-the-end-it-comes-down-to-skill

Metal bats have pluses for young players, but in the end it comes down to skill

Washington State University scientists conducted batting cage tests of wood and metal bats with young players.

There’s long been a debate in baseball circles about the respective benefits and drawbacks of using wood bats versus metal bats. However, there are relatively few scientific studies on the topic that focus specifically on young athletes, who are most likely to use metal bats. Scientists at Washington State University (WSU) conducted their own tests of wood and metal bats with young players. They found that while there are indeed performance differences between wooden and metal bats, a batter’s skill is still the biggest factor affecting how fast the ball comes off the bat, according to a new paper published in the Journal of Sports Engineering and Technology.

According to physicist and acoustician Daniel Russell of Penn State University—who was not involved in the study but has a long-standing interest in the physics of baseball ever since his faculty days at Kettering University in Michigan—metal bats were first introduced in 1974 and soon dominated NCAA college baseball, youth baseball, and adult amateur softball. Those programs liked the metal bats because they were less likely to break than traditional wooden bats, reducing costs.

Players liked them because metal bats can be easier to control and swing faster, as the center of mass is closer to the handle, resulting in a lower moment of inertia (or “swing weight”). A faster swing doesn’t mean that a hit ball will travel faster, however, since the lower moment of inertia is countered by a decreased collision efficiency. Metal bats are also more forgiving if players happen to hit the ball away from the proverbial “sweet spot” of the bat. (The sweet spot is defined in several different ways, but it’s commonly understood to be the area on the bat’s barrel that produces the highest batted-ball speeds.)
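That trade-off can be made concrete with the standard batted-ball-speed formula from the physics-of-baseball literature: exit speed = q·v_pitch + (1 + q)·v_bat, where q is the collision efficiency (roughly 0.2 near the sweet spot). The speeds and q values below are illustrative assumptions, not measurements from the WSU study.

```python
def exit_speed(pitch_speed: float, bat_speed: float, q: float) -> float:
    """Batted-ball speed for collision efficiency q (q ~ 0.2 near the
    sweet spot); all speeds in the same units, e.g. mph."""
    return q * pitch_speed + (1.0 + q) * bat_speed

# A lighter metal bat can be swung a bit faster, but its slightly lower
# collision efficiency claws back most of the gain (illustrative numbers):
wood  = exit_speed(pitch_speed=85.0, bat_speed=70.0, q=0.21)  # 102.55 mph
metal = exit_speed(pitch_speed=85.0, bat_speed=72.0, q=0.20)  # 103.40 mph
```

With these assumed inputs the two exit speeds land within about 1 mph of each other, which is why a faster swing alone doesn’t guarantee a faster hit ball.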

“There’s more of a penalty when you’re not on the sweet spot with wood bats than with the other metal bats,” said Lloyd Smith, director of WSU’s Sport Science Laboratory and a co-author of the latest study. “[And] wood is still heavy. Part of baseball is hitting the ball far, but the other part is just hitting the ball. If you have a heavy bat, you’re going to have a harder time making contact because it’s harder to control.”

Metal bats may also improve performance via a kind of “trampoline effect.” Metal bats are hollow, while wood bats are solid. When a ball hits a solid wood bat, the ball does most of the compressing, and internal friction forces can dissipate as much as 75 percent of its initial energy. A hollow metal barrel, by contrast, behaves more like a spring when it compresses in response to a ball’s impact, so there is much less energy loss. Based on his own research back in 2004, Russell has found that the improved performance of metal bats is linked to the frequency of the barrel’s mode of vibration, aka the “hoop mode.” (Bats with the lowest hoop frequency have the highest performance.)


hydrogels-can-learn-to-play-pong

Hydrogels can learn to play Pong

It’s all about the feedback loops —

Work could lead to new “smart” materials that can learn and adapt to their environment.

This electroactive polymer hydrogel “learned” to play Pong. Credit: Cell Reports Physical Science/Strong et al.

Pong will always hold a special place in the history of gaming as one of the earliest arcade video games. Introduced in 1972, it was a table tennis game featuring very simple graphics and gameplay. In fact, it’s simple enough that even non-living materials known as hydrogels can “learn” to play the game by “remembering” previous patterns of electrical stimulation, according to a new paper published in the journal Cell Reports Physical Science.

“Our research shows that even very simple materials can exhibit complex, adaptive behaviors typically associated with living systems or sophisticated AI,” said co-author Yoshikatsu Hayashi, a biomedical engineer at the University of Reading in the UK. “This opens up exciting possibilities for developing new types of ‘smart’ materials that can learn and adapt to their environment.”

Hydrogels are soft, flexible biphasic materials that swell but do not dissolve in water. So a hydrogel may contain a large amount of water but still maintain its shape, making it useful for a wide range of applications. Perhaps the best-known use is soft contact lenses, but various kinds of hydrogels are also used in breast implants, disposable diapers, EEG and ECG medical electrodes, glucose biosensors, encapsulating quantum dots, solar-powered water purification, cell cultures, tissue engineering scaffolds, water gel explosives, actuators for soft robotics, supersonic shock-absorbing materials, and sustained-release drug delivery systems, among other uses.

In April, Hayashi co-authored a paper showing that hydrogels can “learn” to beat in rhythm with an external pacemaker, something previously only achieved with living cells. They exploited the intrinsic ability of the hydrogels to convert chemical energy into mechanical oscillations, using the pacemaker to apply cyclic compressions. They found that when the oscillation of a gel sample matched the harmonic resonance of the pacemaker’s beat, the system kept a “memory” of that resonant oscillation period and could retain that memory even when the pacemaker was turned off. Such hydrogels might one day be a useful substitute for heart research using animals, providing new ways to research conditions like cardiac arrhythmia.

For this latest work, Hayashi and co-authors were partly inspired by a 2022 study in which brain cells in a dish—dubbed DishBrain—were electrically stimulated in such a way as to create useful feedback loops, enabling them to “learn” to play Pong (albeit badly). As Ars Science Editor John Timmer reported at the time:

Pong proved to be an excellent choice for the experiments. The environment only involves a couple of variables: the location of the paddle and the location of the ball. The paddle can only move along a single line, so the motor portion of things only needs two inputs: move up or move down. And there’s a clear reward for doing things well: you avoid an end state where the ball goes past the paddles and the game stops. It is a great setup for testing a simple neural network.

Put in Pong terms, the sensory portion of the network will take the positional inputs, determine an action (move the paddle up or down), and then generate an expectation for what the next state will be. If it’s interpreting the world correctly, that state will be similar to its prediction, and thus the sensory input will be its own reward. If it gets things wrong, then there will be a large mismatch, and the network will revise its connections and try again.

There were a few caveats—even the best systems didn’t play Pong all that well—but the approach mostly worked. Those systems comprising either mouse or human neurons saw the average length of Pong rallies increase over time, indicating they might be learning the game’s rules. Systems based on non-neural cells, or those lacking a reward system, didn’t see this sort of improvement. The findings provided some evidence that neural networks formed from actual neurons spontaneously develop the ability to learn. And that could explain some of the learning capabilities of actual brains, where smaller groups of neurons are organized into functional units.
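The predict-act-compare loop described in the quoted passage can be caricatured in a few lines of code. This is a deliberately crude illustration of the feedback principle, not a model of DishBrain or of the hydrogel experiments; the gain value and the stationary-ball scenario are assumptions made for the example.

```python
def track_ball(ball_positions, paddle=0.0, gain=0.5):
    """Each step: predict the next input, measure the prediction error,
    and move the paddle to reduce it. Returns the error at each step."""
    errors = []
    prediction = paddle
    for ball in ball_positions:
        error = ball - prediction   # mismatch between expectation and input
        paddle += gain * error      # act to reduce the mismatch
        prediction = paddle         # new expectation for the next state
        errors.append(abs(error))
    return errors

# With repeated feedback, the prediction error shrinks step by step:
errors = track_ball([1.0] * 6)
# errors → [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```

The point of the toy is the shape of the curve: as long as there is a loop from prediction error back to action, the mismatch decays over repeated trials, which is the sense in which both the neurons and the hydrogels “learn.”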
