

NASA defers decision on Mars Sample Return to the Trump administration


“We want to have the quickest, cheapest way to get these 30 samples back.”

This photo montage shows sample tubes shortly after they were deposited onto the surface by NASA’s Perseverance Mars rover in late 2022 and early 2023. Credit: NASA/JPL-Caltech/MSSS

For nearly four years, NASA’s Perseverance rover has journeyed across an unexplored patch of land on Mars—once home to an ancient river delta—and collected a slew of rock samples sealed inside cigar-sized titanium tubes.

These tubes might contain tantalizing clues about past life on Mars, but NASA’s ever-changing plans to bring them back to Earth are still unclear.

On Tuesday, NASA officials presented two options for retrieving and returning the samples gathered by the Perseverance rover. One alternative involves a conventional architecture reminiscent of past NASA Mars missions, relying on the “sky crane” landing system demonstrated on the agency’s two most recent Mars rovers. The other option would be to outsource the lander to the space industry.

NASA Administrator Bill Nelson left a final decision on a new mission architecture to the next NASA administrator working under the incoming Trump administration. President-elect Donald Trump nominated entrepreneur and commercial astronaut Jared Isaacman as the agency’s 15th administrator last month.

“This is going to be a function of the new administration in order to fund this,” said Nelson, a former Democratic senator from Florida who will step down from the top job at NASA on January 20.

The question now is: will they? And if the Trump administration moves forward with Mars Sample Return (MSR), what will it look like? Could it involve a human mission to Mars instead of a series of robotic spacecraft?

The Trump White House is expected to emphasize “results and speed” with NASA’s space programs, with the goal of accelerating a crew landing on the Moon and sending people to explore Mars.

NASA officials had an earlier plan to bring the Mars samples back to Earth, but the program slammed into a budgetary roadblock last year when an independent review team concluded the existing architecture would cost up to $11 billion, double the previous cost projection, and wouldn't get the Mars specimens back to Earth until 2040.

This budget and schedule were non-starters for NASA. The agency tasked government labs, research institutions, and commercial companies with coming up with better ideas to bring home the roughly 30 sealed sample tubes carried aboard the Perseverance rover. NASA deposited 10 sealed tubes on the surface of Mars a couple of years ago as insurance in case Perseverance dies before the arrival of a retrieval mission.

“We want to have the quickest, cheapest way to get these 30 samples back,” Nelson said.

How much for these rocks?

A stripped-down concept proposed by the Jet Propulsion Laboratory in Southern California, which previously was in charge of the over-budget Mars Sample Return architecture, would cost between $6.6 billion and $7.7 billion, according to Nelson. JPL's previous approach would have put a heavier lander onto the Martian surface, carrying small helicopter drones that could pick up sample tubes if the Perseverance rover ran into problems.

NASA previously deleted a “fetch rover” from the MSR architecture and instead will rely on Perseverance to hand off sample tubes to the retrieval lander.

An alternative approach would use a (presumably less expensive) commercial heavy lander, but this concept would still utilize several elements NASA would likely develop in a more traditional government-led manner: a nuclear power source, a robotic arm, a sample container, and a rocket to launch the samples off the surface of Mars and back into space. The cost range for this approach extends from $5.1 billion to $7.1 billion.

Artist’s illustration of SpaceX’s Starship approaching Mars. Credit: SpaceX

JPL will have a “key role” in both paths for MSR, said Nicky Fox, head of NASA’s science mission directorate. “To put it really bluntly, JPL is our Mars center in NASA science.”

If the Trump administration moves forward with either of the proposed MSR plans, this would be welcome news for JPL. The center, which is run by the California Institute of Technology under contract to NASA, laid off 955 employees and contractors last year, citing budget uncertainty, primarily due to the cloudy future of Mars Sample Return.

Without MSR, engineers at the Jet Propulsion Laboratory don't have a flagship-class mission to build after the launch of NASA's Europa Clipper spacecraft last year. The lab recently struggled with rising costs and delays on the previous iteration of MSR and on NASA's Psyche asteroid mission, and it would not be surprising to see more cost overruns on a project as complex as a round-trip flight to Mars.

Ars submitted multiple requests to interview Laurie Leshin, JPL’s director, in recent months to discuss the lab’s future, but her staff declined.

Both MSR mission concepts outlined Tuesday would require multiple launches and an Earth return orbiter provided by the European Space Agency. These options would bring the Mars samples back to Earth as soon as 2035, but perhaps as late as 2039, Nelson said. The return orbiter and sample retrieval lander could launch as soon as 2030 and 2031, respectively.

“The main difference is in the landing mechanism,” Fox said.

To keep those launch schedules, Congress must immediately approve $300 million for Mars Sample Return in this year’s budget, Nelson said.

NASA officials didn’t identify any examples of a commercial heavy lander that could reach Mars, but the most obvious vehicle is SpaceX’s Starship. NASA already has a contract with SpaceX to develop a Starship vehicle that can land on the Moon, and SpaceX founder Elon Musk is aggressively pushing for a Mars mission with Starship as soon as possible.

NASA solicited eight studies from industry earlier this year. SpaceX, Blue Origin, Rocket Lab, and Lockheed Martin, each with its own lander concept, were among the companies that won NASA study contracts. SpaceX and Blue Origin are well-capitalized, with Elon Musk and Amazon's Jeff Bezos as owners, while Lockheed Martin is the only company to have built a lander that successfully reached Mars.

This slide from a November presentation to the Mars Exploration Program Analysis Group shows JPL’s proposed “sky crane” architecture for a Mars sample retrieval lander. The landing system would be modified to handle a load about 20 percent heavier than the sky crane used for the Curiosity and Perseverance rover landings. Credit: NASA/JPL

The science community has long identified a Mars Sample Return mission as the top priority for NASA’s planetary science program. In the National Academies’ most recent decadal survey released in 2022, a panel of researchers recommended NASA continue with the MSR program but stated the program’s cost should not undermine other planetary science missions.

Teeing up for cancellation?

That’s exactly what is happening. Budget pressures from the Mars Sample Return mission, coupled with funding cuts stemming from a bipartisan federal budget deal in 2023, have prompted NASA’s planetary science division to institute a moratorium on starting new missions.

“The decision about Mars Sample Return is not just one that affects Mars exploration,” said Curt Niebur, NASA’s lead scientist for planetary flight programs, in a question-and-answer session with solar system researchers Tuesday. “It’s going to affect planetary science and the planetary science division for the foreseeable future. So I think the entire science community should be very tuned in to this.”

Rocket Lab, which has been more open about its MSR architecture than other companies, has posted details of its sample return concept on its website. Fox declined to offer details on other commercial concepts for MSR, citing proprietary concerns.

“We can wait another year, or we can get started now,” Rocket Lab posted on X. “Our Mars Sample Return architecture will put Martian samples in the hands of scientists faster and more affordably. Less than $4 billion, with samples returned as early as 2031.”

Through its own internal development and acquisitions of other aerospace industry suppliers, Rocket Lab said it has provided components for all of NASA’s recent Mars missions. “We can deliver MSR mission success too,” the company said.

Rocket Lab’s concept for a Mars Sample Return mission. Credit: Rocket Lab

Although NASA’s deferral of a decision on MSR to the next administration might convey a lack of urgency, officials said the agency and potential commercial partners need time to assess what roles the industry might play in the MSR mission.

“They need to flesh out all of the possibilities of what’s required in the engineering for the commercial option,” Nelson said.

On the program’s current trajectory, Fox said NASA would be able to choose a new MSR architecture in mid-2026.

Waiting, rather than deciding on an MSR plan now, will also allow time for the next NASA administrator and the Trump White House to determine whether either option aligns with the administration’s goals for space exploration. In an interview with Ars last week, Nelson said he did not want to “put the new administration in a box” with any significant MSR decisions in the waning days of the Biden administration.

One source with experience in crafting and implementing US space policy told Ars that Nelson’s deferral on a decision will “tee up MSR for canceling.” Faced with a decision to spend billions of dollars on a robotic sample return or billions of dollars to go toward a human mission to Mars, the Trump administration will likely choose the latter, the source said.

If that happens, NASA science funding could be freed up for other pursuits in planetary science. The second priority identified in the most recent planetary decadal survey is an orbiter and atmospheric probe to explore Uranus and its icy moons. NASA has held off on the development of a Uranus mission to focus on the Mars Sample Return first.

Science and geopolitics

Whether it’s with robots or humans, there’s a strong case for bringing pristine Mars samples back to Earth. The titanium tubes carried by the Perseverance rover contain rock cores, loose soil, and air samples from the Martian atmosphere.

“Bringing them back will revolutionize our understanding of the planet Mars and indeed, our place in the solar system,” Fox said. “We explore Mars as part of our ongoing efforts to safely send humans to explore farther and farther into the solar system, while also … getting to the bottom of whether Mars once supported ancient life and shedding light on the early solar system.”

Researchers can perform more detailed examinations of Mars specimens in sophisticated laboratories on Earth than possible with the miniature instruments delivered to the red planet on a spacecraft. Analyzing samples in a terrestrial lab might reveal biosignatures, or the traces of ancient life, that elude detection with instruments on Mars.

“The samples that we have taken by Perseverance actually predate—they are older than any of the samples or rocks that we could take here on Earth,” Fox said. “So it allows us to kind of investigate what the early solar system was like before life began here on Earth, which is amazing.”

Fox said returning Mars samples before a human expedition would help NASA prioritize where astronauts should land on the red planet.

In a statement, the Planetary Society said it is “concerned that NASA is again delaying a decision on the program, committing only to additional concept studies.”

“It has been more than two years since NASA paused work on MSR,” the Planetary Society said. “It is time to commit to a path forward to ensure the return of the samples already being collected by the Perseverance rover.

“We urge the incoming Trump administration to expedite a decision on a path forward for this ambitious project, and for Congress to provide the funding necessary to ensure the return of these priceless samples from the Martian surface.”

China says it is developing its own mission to bring Mars rocks back to Earth. Named Tianwen-3, the mission could launch as soon as 2028 and return samples to Earth by 2031. While NASA’s plan would bring back carefully curated samples from an expansive environment that may have once harbored life, China’s mission will scoop up rocks and soil near its landing site.

“They’re just going to have a mission to grab and go—go to a landing site of their choosing, grab a sample and go,” Nelson said. “That does not give you a comprehensive look for the scientific community. So you cannot compare the two missions. Now, will people say that there’s a race? Of course, people will say that, but it’s two totally different missions.”

Still, Nelson said he wants NASA to be first. He said he has not had detailed conversations with Trump’s NASA transition team.

“I think it was a responsible thing to do, not to hand the new administration just one alternative if they want to have a Mars Sample Return,” Nelson said. “I can’t imagine that they don’t. I don’t think we want the only sample return coming back on a Chinese spacecraft.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



As US marks first H5N1 bird flu death, WHO and CDC say risk remains low

The H5N1 bird flu situation in the US seems more fraught than ever this week as the virus continues to spread swiftly in dairy cattle and birds while sporadically jumping to humans.

On Monday, officials in Louisiana announced that the person who had developed the country’s first severe H5N1 infection had died of the infection, marking the country’s first H5N1 death. Meanwhile, with no signs of H5N1 slowing, seasonal flu is skyrocketing, raising anxiety that the different flu viruses could mingle, swap genetic elements, and generate a yet more dangerous virus strain.

But, despite the seeming fever pitch of viral activity and fears, a representative for the World Health Organization today noted that risk to the general population remains low—as long as one critical factor remains absent: person-to-person spread.

“We are concerned, of course, but we look at the risk to the general population and, as I said, it still remains low,” WHO spokesperson Margaret Harris told reporters at a Geneva press briefing Tuesday in response to questions related to the US death. In terms of updating risk assessments, you have to look at how the virus behaved in that patient and if it jumped from one person to another person, which it didn’t, Harris explained. “At the moment, we’re not seeing behavior that’s changing our risk assessment,” she added.

In a statement on the death late Monday, the US Centers for Disease Control and Prevention emphasized that no human-to-human transmission has been identified in the US. To date, there have been 66 documented human cases of H5N1 infections since the start of 2024. Of those, 40 were linked to exposure to infected dairy cows, 23 were linked to infected poultry, two had no clear source, and one case—the fatal case in Louisiana—was linked to exposure to infected backyard and wild birds.



Science paper piracy site Sci-Hub shares lots of retracted papers

Most scientific literature is published in for-profit journals that rely on subscriptions and paywalls to turn a profit. But that trend has been shifting as various governments and funding agencies are requiring that the science they fund be published in open-access journals. The transition is happening gradually, though, and a lot of the historical literature remains locked behind paywalls.

These paywalls can pose a problem for researchers who aren't at well-funded universities, including many in the Global South, who may not be able to access the research they need to pursue their own studies. One solution has been Sci-Hub, a site where people can upload PDFs of published papers so they can be shared with anyone who can access the site. Despite losses in publishing industry lawsuits and attempts to block access, Sci-Hub continues to serve up research papers that would otherwise be protected by paywalls.

But what it’s serving up may not always be the latest and greatest. Generally, when a paper is retracted for being invalid, publishers issue an updated version of its PDF with clear indications that the research it contains should no longer be considered valid. Unfortunately, it appears that once Sci-Hub has a copy of a paper, it doesn’t necessarily have the ability to ensure it’s kept up to date. Based on a scan of its content done by researchers from India, about 85 percent of the invalid papers they checked had no indication that the paper had been retracted.

Correcting the scientific record

Scientific results go wrong for all sorts of reasons, from outright fraud to honest mistakes. If the problems don’t invalidate the overall conclusions of a paper, it’s possible to update the paper with a correction. If the problems are systemic enough to undermine the results, however, the paper is typically retracted—in essence, it should be treated as if it were never published in the first place.

It doesn’t always work out that way, however. Maybe people ignore the notifications that something has been retracted, or maybe they downloaded a copy of the paper before it got retracted and never saw the notifications at all, but citations to retracted papers regularly appear in the scientific record. Over the long term, this can distort our big-picture view of science, leading to wasted effort and misallocated resources.



Ants vs. humans: Solving the piano-mover puzzle

Who is better at maneuvering a large load through a maze, ants or humans?

The piano-mover puzzle involves trying to transport an oddly shaped load across a constricted environment with various obstructions. It’s one of several variations on classic computational motion-planning problems, a key element in numerous robotics applications. But what would happen if you pitted human beings against ants in a competition to solve the piano-mover puzzle?

According to a paper published in the Proceedings of the National Academy of Sciences, humans have superior cognitive abilities and, hence, would be expected to outperform the ants. However, depriving people of verbal or nonverbal communication can level the playing field, with ants performing better in some trials. And while ants improved their cognitive performance when acting collectively as a group, the same did not hold true for humans.

Co-author Ofer Feinerman of the Weizmann Institute of Science and colleagues saw an opportunity to use the piano-mover puzzle to shed light on group decision-making, as well as the question of whether it is better to cooperate as a group or maintain individuality. “It allows us to compare problem-solving skills and performances across group sizes and down to a single individual and also enables a comparison of collective problem-solving across species,” the authors wrote.

They decided to compare the performances of ants and humans because both species are social and can cooperate while transporting loads larger than themselves. In essence, “people stand out for individual cognitive abilities while ants excel in cooperation,” the authors wrote.

Feinerman et al. used crazy ants (Paratrechina longicornis) for their experiments, along with the human volunteers. They designed a physical version of the piano-mover puzzle involving a large T-shaped load that had to be maneuvered across a rectangular area divided into three chambers, connected via narrow slits. The load started in the first chamber on the left, and the ant and human subjects had to figure out how to transport it through the second chamber and into the third.



Controversial fluoride analysis published after years of failed reviews


70 percent of studies included in the meta-analysis had a high risk of bias.

Federal toxicology researchers on Monday finally published a long-controversial analysis that claims to find a link between high levels of fluoride exposure and slightly lower IQs in children living in areas outside the US, mostly in China and India. As expected, it immediately drew yet more controversy.

The study, published in JAMA Pediatrics, is a meta-analysis, a type of study that combines data from many different studies—in this case, mostly low-quality studies—to come up with new results. None of the data included in the analysis is from the US, and the fluoride levels examined are at least double the level recommended for municipal water in the US. In some places in the world, fluoride is naturally present in water, such as parts of China, and can reach concentrations several-fold higher than fluoridated water in the US.

The authors of the analysis are researchers at the National Toxicology Program at the National Institute of Environmental Health Sciences. For context, this is the same federal research program that published a dubious analysis in 2016 suggesting that cell phones cause cancer in rats. The study underwent a suspicious peer-review process and contained questionable methods and statistics.

The new fluoride analysis shares similarities. NTP researchers have been working on the fluoride study since 2015 and submitted two drafts for peer review to an independent panel of experts at the National Academies of Sciences, Engineering, and Medicine in 2020 and 2021. The study failed its review both times. The National Academies’ reviews found fault with the methods and statistical rigor of the analysis. Specifically, the reviews noted potential bias in the selection of the studies included in the analysis, inconsistent application of risk-of-bias criteria, lack of data transparency, insufficient evaluations of potential confounding, and flawed measures of neurodevelopmental outcomes, among other problems.

After the failing reviews, the NTP selected its own reviewers and self-published the study as a monograph in August.

High risk of bias

The related analysis published Monday looked at data from 74 human studies, 45 of which were conducted in China and 12 in India. Of the 74, 52 were rated as having a high risk of bias, meaning they had designs, study methods, or statistical approaches that could skew the results.

The study’s primary meta-analysis only included 59 of the studies: 47 with a high risk of bias and 12 with a low risk. This analysis looked at standardized mean differences in children’s IQ between higher and lower fluoride exposure groups. Of the 59 studies, 41 were from China.

Among the 47 studies with a high risk of bias, the pooled difference in mean IQ scores between the higher-exposure groups and lower-exposure groups was -0.52, suggesting that higher fluoride exposure lowered IQs. But, among the 12 studies at low risk for bias, the difference was slight overall, only -0.19. And of those 12 studies, eight found no link between fluoride exposure and IQ at all.

Among 31 studies that reported fluoride levels in water, the NTP authors looked at possible IQ associations at three fluoride-level cutoffs: less than 4 mg/L, less than 2 mg/L, and less than 1.5 mg/L. Among all 31 studies, the researchers found that fluoride exposure levels of less than 4 mg/L and less than 2 mg/L were linked to statistically significant decreases in IQ. However, there was no statistically significant link at 1.5 mg/L. For context, 1.5 mg/L is a little over twice the level of fluoride recommended for US community water, which is 0.7 mg/L. When the NTP authors looked at just the studies that had a low risk of bias—seven studies—they saw the same lack of association with the 1.5 mg/L cutoff.

The NTP authors also looked at IQ associations in 20 studies that reported urine fluoride levels and again split the analysis using the same fluoride cutoffs as before. While there did appear to be a link with lower IQ at the highest fluoride level, the two lower fluoride levels had borderline statistical significance. Ten of the 20 studies were assessed as having a low risk of bias, and for just those 10, the results were similar to the larger group.

Criticism

The inclusion of urinary fluoride measurements is sure to spark criticism. For years, experts have noted that these measurements are not standardized, can vary by day and time, and are not reflective of a person’s overall fluoride exposure.

In an editorial published alongside the NTP study today, Steven Levy, a public health dentist at the University of Iowa, blasted the new analysis, including the urinary sample measurements.

“There is scientific consensus that the urinary sample collection approaches used in almost all included studies (ie, spot urinary fluoride or a few 24-hour samples, many not adjusted for dilution) are not valid measures of individuals’ long-term fluoride exposure, since fluoride has a short half-life and there is substantial variation within days and from day to day,” Levy wrote.

Overall, Levy reiterated many of the same concerns raised in the National Academies' reviews, noting the study's lack of transparency, the reliance on highly biased studies, questionable statistics, and questionable exclusion of newer, higher-quality studies, which have found no link between water fluoridation and children's IQ. For instance, one exclusion was a 2023 study out of Australia that found "Exposure to fluoridated water during the first 5 [years] of life was not associated with altered measures of child emotional and behavioral development and executive functioning." A 2022 study out of Spain similarly found no risk from prenatal fluoride exposure.

“Taking these many important concerns together, readers are advised to be very cautious in drawing conclusions about possible associations of fluoride exposures with lower IQ,” Levy concluded. “This is especially true for lower water fluoride levels.”

Another controversial study

But, the debate on water fluoridation is unlikely to recede anytime soon. In a second editorial published alongside the NTP study, other researchers praised the analysis, calling for health organizations and regulators to reassess fluoridation.

“The absence of a statistically significant association of water fluoride less than 1.5 mg/L and children’s IQ scores in the dose-response meta-analysis does not exonerate fluoride as a potential risk for lower IQ scores at levels found in fluoridated communities,” the authors argue, noting there are additional sources of fluoride, such as toothpaste and foods.

The EPA estimates that 40 to 70 percent of people’s fluoride exposure comes from water.

Two of the three authors of the second editorial—Christine Till and Bruce Lanphear—were authors of a highly controversial 2019 study out of Canada suggesting that fluoride intake during pregnancy could reduce children’s IQ. The authors even suggested that pregnant people should reduce their fluoride intake. But, the study, also published in JAMA Pediatrics, only found a link between maternal fluoride levels and IQ in male children. There was no association in females.

The study drew heavy backlash, with blistering responses published in JAMA Pediatrics. In one response, UK researchers essentially accused Till and colleagues of a statistical fishing expedition to find a link.

“[T]here was no significant IQ difference between children from fluoridated and nonfluoridated communities and no overall association with maternal urinary fluoride (MUFSG). The authors did not mention this and instead emphasized the significant sex interaction, where the association appeared for boys but not girls. No theoretical rationale for this test was provided; in the absence of a study preregistration, we cannot know whether it was planned a priori. If not, the false-positive probability increases because there are many potential subgroups that might show the result by chance.”

Other researchers criticized the study’s statistics, lack of data transparency, the use of maternal urine sampling, and the test they used to assess the IQ of children ages 3 and 4.


Beth is Ars Technica’s Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.



Fast radio bursts originate near the surface of stars

One of the two papers published on Wednesday looks at the polarization of the photons in the burst itself, finding that the angle of polarization changes rapidly over the 2.5 milliseconds that FRB 20221022A lasted. The 130-degree rotation that occurred follows an S-shaped pattern, which has already been observed in about half of the pulsars we’ve observed—neutron stars that rotate rapidly and sweep a bright jet across the line of sight with Earth, typically multiple times each second.

The implication of this finding is that the source of the FRB is likely to also be a compact, rapidly rotating object—or at least the source of this FRB. As of right now, this is the only FRB that we know displays this sort of behavior. While not all pulsars show this pattern of rotation, half of them do, and we've certainly observed enough FRBs that we should have picked up others like this if they occurred at an appreciable rate.

Scattered

The second paper performs a far more complicated analysis, searching for indications of interactions between the FRB and the interstellar medium that exists within galaxies. These interactions have two effects. One, caused by scattering off interstellar material, spreads the burst out over time in a frequency-dependent manner. Scattering can also cause a random brightening/dimming of different areas of the spectrum, called scintillation, somewhat analogous to the twinkling of stars caused by our atmosphere.

In this case, the photons of the FRB have had three encounters with matter that can induce these effects: the sparse interstellar material of the source galaxy, the equally sparse interstellar material in our own Milky Way, and the even more sparse intergalactic material in between the two. Since the source galaxy for FRB 20221022A is relatively close to our own, the intergalactic medium can be ignored, leaving the detection with two major sources of scattering.



One less thing to worry about in 2025: Yellowstone probably won’t go boom


There’s not enough melted material near the surface to trigger a massive eruption.

It's difficult to comprehend what 1,000 cubic kilometers of rock would look like. It's even more difficult to imagine it being violently flung into the air. Yet the Yellowstone volcanic system blasted more than twice that amount of rock into the sky about 2 million years ago. It has generated a number of massive (if somewhat smaller) eruptions since, and even larger eruptions occurred deeper in the past.

All of which might be enough to keep someone nervously watching the seismometers scattered throughout the area. But a new study suggests that there’s nothing to worry about in the near future: There’s not enough molten material pooled in one place to trigger the sort of violent eruptions that have caused massive disruptions in the past. The study also suggests that the primary focus of activity may be shifting outside of the caldera formed by past eruptions.

Understanding Yellowstone

Yellowstone is fueled by what’s known as a hotspot, where molten material from the Earth’s mantle percolates up through the crust. The rock that comes up through the crust is typically basaltic (a definition based on the ratio of elements in its composition) and can erupt directly. This tends to produce relatively gentle eruptions where lava flows across a broad area, generally like you see in Hawaii and Iceland. But this hot material can also melt rock within the crust, producing a material called rhyolite. This is a much more viscous material that does not flow very readily and, instead, can cause explosive eruptions.

The risks at Yellowstone are rhyolitic eruptions. But it can be difficult to tell the two types of molten material apart, at least while they’re several kilometers below the surface. Various efforts have been made over the years to track the molten material below Yellowstone, but differences in resolution and focus have left many unanswered questions.

Part of the problem is that a lot of this data came from studies of seismic waves traveling through the region. Their travel is influenced by various factors, including the composition of the material they’re traveling through, its temperature, and whether it’s a liquid or solid. In a lot of cases, this leaves several potential solutions consistent with the seismic data—you can potentially see the same behavior from different materials at different temperatures.

To get around this issue, the new research measured the conductivity of the rock, which can change by as much as three orders of magnitude when transitioning from a solid to a molten phase. The overall conductivity we measure also increases as more of the molten material is connected into a single reservoir rather than being dispersed into individual pockets.
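To see why this makes conductivity such a sensitive probe of melt, consider a toy mixing model in the spirit of Archie's law, where bulk conductivity scales as a power of the interconnected melt fraction. The melt conductivity and exponent below are illustrative assumptions, not values from the study:

```python
# Toy mixing model in the spirit of Archie's law: bulk conductivity of a
# partially molten rock scales as sigma_melt * phi**m, where phi is the
# melt fraction and the exponent m reflects how well the melt pockets
# connect. sigma_melt = 3.0 S/m and m = 1.3 are illustrative assumptions.

def bulk_conductivity(melt_fraction, sigma_melt=3.0, m=1.3):
    """Approximate bulk conductivity (S/m) for a given melt fraction."""
    return sigma_melt * melt_fraction ** m

for phi in (0.01, 0.1, 0.5):
    print(f"melt fraction {phi:4.2f}: {bulk_conductivity(phi):.4f} S/m")
```

Even this cartoon shows the steep sensitivity: going from 1 percent to 50 percent melt raises the bulk conductivity by more than two orders of magnitude, consistent with the solid-to-molten contrast described above.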

This sort of “magnetotelluric” data has been obtained in the past but at a relatively low resolution. For the new study, a dense array of sensors was placed in the Yellowstone caldera and many surrounding areas to the north and east. (You can compare the previous and new recording sites as black and red triangles on this map.)

Yellowstone’s plumbing

That has allowed the research team to build a three-dimensional map of the molten material underneath Yellowstone and to determine the fraction of the material in a given area that’s molten. The team finds that there are two major sources of molten material that extend up from the mantle-crust boundary at about 50 kilometers below the surface. These extend upward separately but merge about 20 kilometers below the surface.


Underneath Yellowstone: Two large lobes of hot material from the mantle (in yellow) melt rock closer to the surface (orange), creating pools of hot material (red and orange) that power hydrothermal systems and past eruptions, and may be the sites of future activity. Credit: Bennington, et al.

While they collectively contain a lot of molten basaltic material (between 4,000 and 6,500 cubic kilometers of it), it’s not very concentrated. Instead, this is mostly relatively small volumes of molten material traveling through cracks and faults in solid rock. This keeps the concentration of molten material below that needed to enable eruptions.

After the two streams of basaltic material merge, they form a reservoir that includes a significant amount of melted crustal material—meaning rhyolitic. The amount of rhyolitic material here is, at most, just under 500 cubic kilometers, so it could fuel a major eruption, albeit a small one by historic Yellowstone standards. But again, the fraction of melted material in this volume of rock is relatively low and not considered likely to enable eruptions.

From there to the surface, there are several distinct features. Relative to the hotspot, the North American plate above is moving to the west, which has historically meant that the site of eruptions has moved from west to east across the continent. Accordingly, there is a pool off to the west of the bulk of near-surface molten material that no longer seems to be connected to the rest of the system. It’s small, at only about 100 cubic kilometers of material, and is too diffuse to enable a large eruption.

Future risks?

There’s a similar near-surface blob of molten material that may not currently be connected to the rest of the molten material to the south of that. It’s even smaller, likely less than 50 cubic kilometers of material. But it sits just below a large blob of molten basalt, so it is likely to be receiving a fair amount of heat input. This site seems to have also fueled the most recent large eruption in the caldera. So, while it can’t fuel a large eruption today, it’s not possible to rule the site out for the future.

Two other near-surface areas containing molten material appear to power two of the major sites of hydrothermal activity, the Norris Geyser Basin and Hot Springs Basin. These are on the northern and eastern edges of the caldera, respectively. The one to the east contains a small amount of material that isn’t concentrated enough to trigger eruptions.

But the site to the northeast contains the largest volume of rhyolitic material, with up to nearly 500 cubic kilometers. It’s also one of only two regions with a direct connection to the molten material moving up through the crust. So, while it’s not currently poised to erupt, this appears to be the most likely area to trigger a major eruption in the future.

In summary, while there’s a lot of molten material near the current caldera, all of it is spread too diffusely within the solid rock to enable it to trigger a major eruption. Significant changes will need to take place before we see the site cover much of North America with ash again. Beyond that, the image is consistent with our big-picture view of the Yellowstone hotspot, which has left a trail of eruptions across western North America, driven by the movement of the North American plate.

That movement has now left one pool of molten material west of the caldera disconnected from any heat sources, which will likely allow it to cool. Meanwhile, the largest pool of near-surface molten rock is east of the caldera, which may ultimately drive a transition to explosive eruptions outside the present caldera.

Nature, 2025. DOI: 10.1038/s41586-024-08286-z  (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Delve into the physics of the Hula-Hoop

High-speed video of experiments on a robotic hula hooper, whose hourglass form holds the hoop up and in place.

Some version of the Hula-Hoop has been around for millennia, but the popular plastic version was introduced by Wham-O in the 1950s and quickly became a fad. Now, researchers have taken a closer look at the underlying physics of the toy, revealing that certain body types are better at keeping the spinning hoops elevated than others, according to a new paper published in the Proceedings of the National Academy of Sciences.

“We were surprised that an activity as popular, fun, and healthy as hula hooping wasn’t understood even at a basic physics level,” said co-author Leif Ristroph of New York University. “As we made progress on the research, we realized that the math and physics involved are very subtle, and the knowledge gained could be useful in inspiring engineering innovations, harvesting energy from vibrations, and improving robotic positioners and movers used in industrial processing and manufacturing.”

Ristroph’s lab frequently addresses these kinds of colorful real-world puzzles. For instance, in 2018, Ristroph and colleagues fine-tuned the recipe for the perfect bubble based on experiments with soapy thin films. In 2021, the Ristroph lab looked into the formation processes underlying so-called “stone forests” common in certain regions of China and Madagascar.

In 2021, his lab built a working Tesla valve, in accordance with the inventor’s design, and measured the flow of water through the valve in both directions at various pressures. They found the water flowed about two times slower in the nonpreferred direction. In 2022, Ristroph studied the surprisingly complex aerodynamics of what makes a good paper airplane—specifically, what is needed for smooth gliding.


Girl twirling a Hula-Hoop in 1958 Credit: George Garrigues/CC BY-SA 3.0

And last year, Ristroph’s lab cracked the conundrum of physicist Richard Feynman’s “reverse sprinkler” problem, concluding that the reverse sprinkler rotates a good 50 times slower than a regular sprinkler but operates along similar mechanisms. The secret is hidden inside the sprinkler, where there are jets that make it act like an inside-out rocket. The internal jets don’t collide head-on; rather, as water flows around the bends in the sprinkler arms, it is slung outward by centrifugal force, leading to asymmetric flow.



Manta rays inspire faster swimming robots and better water filters

This robot can also dive and come back to the surface. Faster flapping results in strong downward waves that will push the robot upward, while slower flapping creates weaker upward waves that allow it to go further down. (Actual mantas sink if they slow down.)  It also proved it could fetch a payload from the bottom of a tank and bring it to the surface.

Eating on the fly

Because manta rays are essentially giant moving water filters, researchers from MIT looked to them and other mobula rays (a group that includes mantas and devil rays) for inspiration when figuring out potential improvements to industrial water filters.

Mantas feed by leaving their mouths open as they swim. At the bottom of either side of a manta’s mouth are structures known as mouthplates, which look something like a dashboard air conditioner. When water enters the mouth, plankton particles too large to pass through the plates bounce further down into the manta’s body cavity and, eventually, to its stomach. Gills absorb oxygen from the water that gushes out so the manta can breathe.

The MIT team was especially interested in mobula rays because they thought the animals struck an ideal balance between allowing water in quickly enough to breathe while maintaining highly selective structures that prevent most plankton from escaping into the water. To create a filter as close to a mobula ray as possible, the team 3D-printed plates that were then glued together to create narrow openings between them. Particles that do not pass instead flow away into a waste reservoir.

With slow pumping, water and smaller particles flowed out of the filter. When pumping was sped up, the water created a vortex in each opening that allowed water, but not particles, through. The team realized that this is how mobula rays are such successful filter feeders. They must know the right speed to swim so they can breathe and still get an optimal amount of plankton filtered into their mouths.

The team thinks that incorporating vortex action will “expand the traditional design of [industrial] filters,” as they said in a study recently published in PNAS.

Manta rays may look alien, but there is nothing sci-fi about how they use physics to their advantage, from powerful swimming to efficient (and simultaneous) eating and breathing. Sometimes nature comes through with the most ingenious tech upgrades.

Science Advances, 2024. DOI: 10.1126/sciadv.adq4222

PNAS, 2024. DOI:  10.1073/pnas.241001812



Evolution journal editors resign en masse


An emerging form of protest?

Board members expressed concerns over high fees, editorial independence, and use of AI in editorial processes.

Over the holiday weekend, all but one member of the editorial board of Elsevier’s Journal of Human Evolution (JHE) resigned “with heartfelt sadness and great regret,” according to Retraction Watch, which helpfully provided an online PDF of the editors’ full statement. It’s the 20th mass resignation from a science journal since 2023 over various points of contention, per Retraction Watch, many in response to controversial changes in the business models used by the scientific publishing industry.

“This has been an exceptionally painful decision for each of us,” the board members wrote in their statement. “The editors who have stewarded the journal over the past 38 years have invested immense time and energy in making JHE the leading journal in paleoanthropological research and have remained loyal and committed to the journal and our authors long after their terms ended. The [associate editors] have been equally loyal and committed. We all care deeply about the journal, our discipline, and our academic community; however, we find we can no longer work with Elsevier in good conscience.”

The editorial board cited several changes made over the last ten years that it believes are counter to the journal’s longstanding editorial principles. These included eliminating support for a copy editor and a special issues editor, leaving it to the editorial board to handle those duties. When the board expressed the need for a copy editor, Elsevier’s response, they said, was “to maintain that the editors should not be paying attention to language, grammar, readability, consistency, or accuracy of proper nomenclature or formatting.”

There is also a major restructuring of the editorial board underway that aims to reduce the number of associate editors by more than half, which “will result in fewer AEs handling far more papers, and on topics well outside their areas of expertise.”

Furthermore, there are plans to create a third-tier editorial board that functions largely in a figurehead capacity, after Elsevier “unilaterally took full control” of the board’s structure in 2023 by requiring all associate editors to renew their contracts annually—which the board believes undermines its editorial independence and integrity.

Worst practices

In-house production has been reduced or outsourced, and in 2023 Elsevier began using AI during production without informing the board, resulting in many style and formatting errors, as well as reverting to earlier versions of papers that had already been accepted and formatted by the editors. “This was highly embarrassing for the journal and resolution took six months and was achieved only through the persistent efforts of the editors,” the editors wrote. “AI processing continues to be used and regularly reformats submitted manuscripts to change meaning and formatting and require extensive author and editor oversight during proof stage.”

In addition, the author page charges for JHE are significantly higher than even Elsevier’s other for-profit journals, as well as broad-based open access journals like Scientific Reports. Not many of the journal’s authors can afford those fees, “which runs counter to the journal’s (and Elsevier’s) pledge of equality and inclusivity,” the editors wrote.

The breaking point seems to have come in November, when Elsevier informed co-editors Mark Grabowski (Liverpool John Moores University) and Andrea Taylor (Touro University California College of Osteopathic Medicine) that it was ending the dual-editor model that has been in place since 1986. When Grabowki and Taylor protested, they were told the model could only remain if they took a 50 percent cut in their compensation.

Elsevier has long had its share of vocal critics (including our own Chris Lee) and this latest development has added fuel to the fire. “Elsevier has, as usual, mismanaged the journal and done everything they could to maximize profit at the expense of quality,” biologist PZ Myers of the University of Minnesota Morris wrote on his blog Pharyngula. “In particular, they decided that human editors were too expensive, so they’re trying to do the job with AI. They also proposed cutting the pay for the editor-in-chief in half. Keep in mind that Elsevier charges authors a $3990 processing fee for each submission. I guess they needed to improve the economics of their piratical mode of operation a little more.”

Elsevier has not yet responded to Ars’ request for comment; we will update accordingly should a statement be issued.

Not all AI uses are created equal

John Hawks, an anthropologist at the University of Wisconsin, Madison, who has published 17 papers in JHE over his career, expressed his full support for the board members’ decision on his blog, along with shock at the (footnoted) revelation that Elsevier had introduced AI to its editorial process in 2023. “I’ve published four articles in the journal during the last two years, including one in press now, and if there was any notice to my co-authors or me about an AI production process, I don’t remember it,” he wrote, noting that the move violates the journal’s own AI policies. “Authors should be informed at the time of submission how AI will be used in their work. I would have submitted elsewhere if I was aware that AI would potentially be altering the meaning of the articles.”

There is certainly cause for concern when it comes to using AI in the pursuit of science. For instance, earlier this year, we witnessed the viral sensation of several egregiously bad AI-generated figures published in a peer-reviewed article in Frontiers, a reputable scientific journal. Scientists on social media expressed equal parts shock and ridicule at the images, one of which featured a rat with grotesquely large and bizarre genitals. The paper has since been retracted, but the incident reinforces a growing concern that AI will make published scientific research less trustworthy, even as it increases productivity.

That said, there are also some useful applications of AI in the scientific endeavor. For instance, back in January, the research publisher Science announced that all of its journals would begin using commercial software that automates the process of detecting improperly manipulated images. Perhaps that would have caught the egregious rat genitalia figure, although as Ars Science Editor John Timmer pointed out at the time, the software has limitations. “While it will catch some of the most egregious cases of image manipulation, enterprising fraudsters can easily avoid being caught if they know how the software operates,” he wrote.

Hawks acknowledged on his blog that the use of AI by scientists and scientific journals is likely inevitable and even recognizes the potential benefits. “I don’t think this is a dystopian future. But not all uses of machine learning are equal,” he wrote. To wit:

[I]t’s bad for anyone to use AI to reduce or replace the scientific input and oversight of people in research—whether that input comes from researchers, editors, reviewers, or readers. It’s stupid for a company to use AI to divert experts’ effort into redundant rounds of proofreading, or to make disseminating scientific work more difficult.

In this case, Elsevier may have been aiming for good but instead hit the exacta of bad and stupid. It’s especially galling that they demand transparency from authors but do not provide transparency about their own processes… [I]t would be a very good idea for authors of recent articles to make sure that they have posted a preprint somewhere, so that their original pre-AI version will be available for readers. As the editors lose access, corrections to published articles may become difficult or impossible.

Nature published an article back in March raising questions about the efficacy of mass resignations as an emerging form of protest after all the editors of the Wiley-published linguistics journal Syntax resigned in February. (Several of their concerns mirror those of the JHE editorial board.) Such moves certainly garner attention, but even former Syntax editor Klaus Abels of University College London told Nature that the objective of such mass resignations should be on moving beyond mere protest, focusing instead on establishing new independent nonprofit journals for the academic community that are open access and have high academic standards.

Abels and his former Syntax colleagues are in the process of doing just that, following the example of the former editors of Critical Public Health and another Elsevier journal, NeuroImage, last year.


Jennifer is a senior reporter at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Frogfish reveals how it evolved the “fishing rod” on its head

In most bony fish, or teleosts, motor neurons for fins are found on the sides (ventrolateral zone) of the underside (ventral horn) of the spinal cord. The motor neurons controlling the illicium of frogfish are in their own cluster and located in the dorsolateral zone. In fish, this is unusual.

“The peculiar location of fishing motor neurons, with little doubt, is linked with the specialization of the illicium serving fishing behavior,” the team said in a study recently published in the Journal of Comparative Neurology.

Fishing for answers

So what does this have to do with evolution? The white-spotted pygmy filefish might look nothing like a frogfish and has no built-in fishing lure, but it is still a related species and can possibly tell us something.

While the first dorsal fin of the filefish doesn’t really move—it is thought that its main purpose is to scare off predators by looking menacing—there are still motor neurons that control it. Motor neurons for the first dorsal fin of filefish were found in the same location as motor neurons for the second, third and fourth dorsal fins in frogfish. In frogfish, these fins also do not move much while swimming, but can appear threatening to a predator.

If the same types of motor neurons control non-moving fins in both species, the frogfish has something extra when it comes to the function and location of motor neurons controlling the illicium.

Yamamoto thinks the unique group of fishing motor neurons found in frogfish suggests that, as a result of evolution, “the motor neurons for the illicium [became] segregated from other motor neurons” to end up in their own distinct cluster away from motor neurons controlling other fins, as he said in the study.

What exactly caused the functional and locational shift of motor neurons that give the frogfish’s illicium its function is still a mystery. How the brain influences their fishing behavior is another area that needs to be investigated.

While Yamamoto and his team speculate that specific regions of the brain send messages to the fishing motor neurons, they do not yet know which regions are involved, and say that more studies need to be carried out on other species of fish and the groups of motor neurons that power each of their dorsal fins.

In the meantime, the frogfish will continue being its freaky self.

Journal of Comparative Neurology, 2024. DOI: 10.1002/cne.25674



Ten cool science stories we almost missed


Bronze Age combat, moral philosophy and Reddit’s AITA, Mondrian’s fractal tree, and seven other fascinating papers.

There is rarely time to write about every cool science paper that comes our way; many worthy candidates sadly fall through the cracks over the course of the year. But as 2024 comes to a close, we’ve gathered ten of our favorite such papers at the intersection of science and culture as a special treat, covering a broad range of topics: from reenacting Bronze Age spear combat and applying network theory to the music of Johann Sebastian Bach, to Spider-Man inspired web-slinging tech and a mathematical connection between a turbulent phase transition and your morning cup of coffee. Enjoy!

Reenacting Bronze Age spear combat


An experiment with experienced fighters who spar freely using different styles. Credit: Valerio Gentile/CC BY

The European Bronze Age saw the rise of institutionalized warfare, evidenced by the many spearheads and similar weaponry archaeologists have unearthed. But how might these artifacts be used in actual combat? Dutch researchers decided to find out by constructing replicas of Bronze Age shields and spears and using them in realistic combat scenarios. They described their findings in an October paper published in the Journal of Archaeological Science.

There have been a couple of prior experimental studies on bronze spears, but per Valerio Gentile (now at the University of Gottingen) and coauthors, practical research to date has been quite narrow in scope, focusing on throwing weapons against static shields. Coauthors C.J. van Dijk of the National Military Museum in the Netherlands and independent researcher O. Ter Mors each had more than a decade of experience teaching traditional martial arts, specializing in medieval polearms and one-handed weapons. So they were ideal candidates for testing the replica spears and shields.

Of course, there is no direct information on prehistoric fighting styles, so van Dijk and Ter Mors relied on basic biomechanics of combat movements with similar weapons detailed in historic manuals. They ran three versions of the experiment: one focused on engagement and controlled collisions, another on delivering wounding body blows, and the third on free sparring. They then studied wear marks left on the spearheads and found they matched the marks found on similar genuine weapons excavated from Bronze Age sites. They also gleaned helpful clues to the skills required to use such weapons.

Journal of Archaeological Science, 2024. DOI: 10.1016/j.jas.2024.106044 (About DOIs).

Physics of Ned Kahn’s kinetic sculptures


Shimmer Wall, The Franklin Institute, Philadelphia, Pennsylvania. Credit: Ned Kahn

Environmental artist and sculptor Ned Kahn is famous for his kinematic building facades, inspired by his own background in science. An exterior wall on the Children’s Museum of Pittsburgh, for instance, consists of hundreds of flaps that move in response to wind, creating distinctive visual patterns. Kahn used the same method to create his Shimmer Wall at Philadelphia’s Franklin Institute, as well as several other similar projects.

Physicists at Sorbonne Universite in Paris have studied videos of Kahn’s kinetic facades and conducted experiments to measure the underlying physical mechanisms, outlined in a November paper published in the journal Physical Review Fluids. The authors analyzed 18 YouTube videos taken of six of Kahn’s kinematic facades, working with Kahn and building management to get the dimensions of the moving plates, scaling up from the video footage to get further information on spatial dimensions.

They also conducted their own wind tunnel experiments, using strings of pendulum plates. Their measurements confirmed that the kinetic patterns were propagating waves to create the flickering visual effects. The plates’ movement is driven primarily by their natural resonant frequencies at low speeds, and by pressure fluctuations from the wind at higher speeds.

Physical Review Fluids, 2024. DOI: 10.1103/PhysRevFluids.9.114604 (About DOIs).

How brewing coffee connects to turbulence


Trajectories in time traced out by turbulent puffs as they move along a simulated pipe and in experiments, with blue regions indicating puff “traffic jams.” Credit: Grégoire Lemoult et al., 2024

Physicists have been studying turbulence for centuries, particularly the transitional period where flows shift from predictably smooth (laminar flow) to highly turbulent. That transition is marked by localized turbulent patches known as “puffs,” which often form in fluids flowing through a pipe or channel. In an October paper published in the journal Nature Physics, physicists used statistical mechanics to reveal an unexpected connection between the process of brewing coffee and the behavior of those puffs.

Traditional mathematical models of percolation date back to the 1940s. Directed percolation is when the flow occurs in a specific direction, akin to how water moves through freshly ground coffee beans, flowing down in the direction of gravity. There’s a sweet spot for the perfect cuppa, where the rate of flow is sufficiently slow to absorb most of the flavor from the beans, but also fast enough not to back up in the filter. That sweet spot in your coffee brewing process corresponds to the aforementioned laminar-turbulent transition in pipes.

Physicist Nigel Goldenfeld of the University of California, San Diego, and his coauthors used pressure sensors to monitor the formation of puffs in a pipe, focusing on how puff-to-puff interactions influenced each other’s motion. Next, they tried to mathematically model the relevant phase transitions to predict puff behavior. They found that the puffs behave much like cars moving on a freeway during rush hour: they are prone to traffic jams—i.e., when a turbulent patch matches the width of the pipe, causing other puffs to build up behind it—that form and dissipate on their own. And they tend to “melt” at the laminar-turbulent transition point.
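The analogy to directed percolation can be made concrete with a toy branching model on a lattice, in which each active site (a "puff") independently activates each of its two downstream neighbors with probability p. This is a cartoon of the universality class, not the authors' model; the critical probability for this bond-directed-percolation lattice is known to sit near p ≈ 0.6447:

```python
import random

# Minimal 1+1D directed-percolation sketch: each active site activates
# each of its two downstream neighbors with probability p. Below the
# critical point (p_c ~ 0.6447 for this bond lattice), activity dies
# out; above it, turbulence-like activity persists indefinitely.

def survives(p, width=200, steps=200, seed=1):
    """True if activity started from one site lasts the full run."""
    random.seed(seed)
    active = {width // 2}          # start from a single "puff"
    for _ in range(steps):
        nxt = set()
        for s in active:
            for d in (0, 1):       # two downstream bonds per site
                if random.random() < p:
                    nxt.add((s + d) % width)
        active = nxt
        if not active:
            return False
    return True

print(survives(0.3))   # subcritical: activity dies out
print(survives(0.9))   # supercritical: activity persists
```

Dialing p toward the critical value from above is the lattice analog of the laminar-turbulent transition, where puffs of activity barely sustain themselves.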

Nature Physics, 2024. DOI: 10.1038/s41567-024-02513-0 (About DOIs).

Network theory and Bach’s music

In a network representation of music, notes are represented by nodes, and transitions between notes are represented by directed edges connecting the nodes. Credit: S. Kulkarni et al., 2024

When you listen to music, does your ability to remember or anticipate the piece tell you anything about its structure? Physicists at the University of Pennsylvania developed a model based on network theory to do just that, describing their work in a February paper published in the journal Physical Review Research. Johann Sebastian Bach’s works were an ideal choice given their highly mathematical structure, and the composer was so prolific, across so many different kinds of musical compositions—preludes, fugues, chorales, toccatas, concertos, suites, and cantatas—as to allow for useful comparisons.

First, the authors built a simple “true” network for each composition, in which individual notes served as “nodes” and the transitions from note to note served as “edges” connecting them. Then they calculated the amount of information in each network. They found it was possible to tell the difference between compositional forms based on their information content (entropy). The more complex toccatas and fugues had the highest entropy, while simpler chorales had the lowest.
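The "true network" construction is simple enough to sketch directly: notes become nodes, note-to-note transitions become directed edges, and entropy measures how unpredictable the outgoing transitions are. The melodies below are toy sequences, not actual Bach, and the per-node average entropy is a simplified stand-in for the paper's information measure:

```python
import math
from collections import Counter, defaultdict

# Notes as nodes, note-to-note transitions as directed edges; score a
# sequence by the average Shannon entropy (in bits) of each note's
# outgoing transitions. Toy melodies only, not actual Bach.

def transition_entropy(notes):
    """Average entropy over each note's outgoing transition distribution."""
    out = defaultdict(Counter)
    for a, b in zip(notes, notes[1:]):
        out[a][b] += 1
    entropies = []
    for counts in out.values():
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(h)
    return sum(entropies) / len(entropies)

simple = ["C", "D", "C", "D", "C", "D", "C"]              # chorale-like repetition
varied = ["C", "D", "E", "C", "F", "D", "G", "E", "C"]    # more varied transitions
print(transition_entropy(simple))   # 0.0: every transition is fully determined
print(transition_entropy(varied))   # 0.4: some notes have unpredictable successors
```

The repetitive sequence scores zero entropy, like the simpler chorales, while the more varied one scores higher, echoing the toccata-and-fugue result.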

Next, the team wanted to quantify how effectively this information was communicated to the listener, a task made more difficult by the innate subjectivity of human perception. They developed a fuzzier “inferred” network model for this purpose, capturing an essential aspect of our perception: we find a balance between accuracy and cost, simplifying some details so as to make it easier for our brains to process incoming information like music.

The results: There were fewer differences between the true and inferred networks for Bach’s compositions than for randomly generated networks, suggesting that clustering and the frequent repetition of transitions (represented by thicker edges) in Bach networks were key to effectively communicating information to the listener. The next step is to build a multi-layered network model that incorporates elements like rhythm, timbre, chords, or counterpoint (a Bach specialty).

DOI: Physical Review Research, 2024. 10.1103/PhysRevResearch.6.013136 (About DOIs).

The philosophy of Reddit’s AITA

Count me among the many people practically addicted to Reddit’s “Am I the Asshole” (AITA) forum. It’s such a fascinating window into the intricacies of how flawed human beings navigate different relationships, whether personal or professional. That’s also what makes it a fantastic source of commonplace moral dilemmas for philosophers like Daniel Yudkin of the University of Pennsylvania. Relational context matters, as Yudkin and several co-authors ably demonstrated in a PsyArXiv preprint earlier this year.

For their study, Yudkin et al. compiled a dataset of nearly 370,000 AITA posts, along with over 11 million comments, posted between 2018 and 2021. They used machine learning to analyze the language of those posts and sort them into different categories, relying on an existing taxonomy that identifies six basic areas of moral concern: fairness/proportionality, feelings, harm/offense, honesty, relational obligation, and social norms.

Yudkin et al. identified 29 of the most common dilemmas in the AITA dataset and grouped them according to moral theme. Two of the most common were relational transgression and relational omission (failure to do what was expected), followed by behavioral over-reaction and unintended harm. Cheating and deliberate misrepresentation/dishonesty were the moral dilemmas rated most negatively in the dataset—even more so than intentional harm. Being judgmental was also evaluated very negatively, as it was often perceived as being self-righteous or hypocritical. The least negatively evaluated dilemmas were relational omissions.

As for relational context, cheating and broken promise dilemmas typically involved romantic partners like boyfriends rather than one’s mother, for example, while mother-related dilemmas more frequently fell under relational omission. Essentially, “people tend to disappoint their mothers but be disappointed by their boyfriends,” the authors wrote. Less close relationships, by contrast, tend to be governed by “norms of politeness and procedural fairness.” Hence, Yudkin et al. prefer to think of morality “less as a set of abstract principles and more as a ‘relational toolkit,’ guiding and constraining behavior according to the demands of the social situation.”

DOI: PsyArXiv, 2024. 10.31234/osf.io/5pcew (About DOIs).

Fractal scaling of trees in art

De grijze boom (Gray tree) by Piet Mondrian, 1911. Credit: Public domain

Leonardo da Vinci famously invented a so-called “rule of trees” as a guide to realistically depicting trees in artistic representations according to their geometric proportions. In essence, if you took all the branches of a given tree, folded them up and compressed them into something resembling a trunk, that trunk would have the same thickness from top to bottom. That rule in turn implies a fractal branching pattern, with a scaling exponent of about 2 describing the proportions between the diameters of nearby boughs and the number of boughs with a given diameter.

According to the authors of a preprint posted to the physics arXiv in February, however, recent biological research suggests a higher scaling exponent of 3, known as Murray’s Law, for the rule of trees. Their analysis of 16th century Islamic architecture, Japanese paintings from the Edo period, and 20th century European art showed fractal scaling between 1.5 and 2.5. However, when they analyzed an abstract tree painting by Piet Mondrian, they found it exhibited fractal scaling of 3—painted before mathematicians had even formulated Murray’s Law—even though Mondrian’s tree did not feature explicit branching.
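
The competing exponents are easy to compare numerically. Under a “rule of trees” with scaling exponent alpha, a parent bough of diameter D splitting into n daughters of diameter d satisfies D**alpha = n * d**alpha. A minimal sketch (equal-sized daughters and the function name are our simplifying assumptions):

```python
def daughter_diameter(parent_d, n_branches, alpha):
    """Diameter of each of n equal daughter boughs under the rule
    parent_d**alpha == n_branches * d**alpha."""
    return parent_d * n_branches ** (-1.0 / alpha)

# Leonardo's rule (alpha = 2): a 10 cm trunk splitting into two boughs
print(round(daughter_diameter(10, 2, alpha=2), 3))  # 7.071
# Murray's law (alpha = 3) predicts slightly thicker daughter boughs
print(round(daughter_diameter(10, 2, alpha=3), 3))  # 7.937
```

With alpha = 2 the daughters’ combined cross-sectional area equals the trunk’s, which is why Leonardo’s folded-up tree keeps a constant thickness from top to bottom.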

The findings intrigued physicist Richard Taylor of the University of Oregon, whose work over the last 20 years includes analyzing fractal patterns in the paintings of Jackson Pollock. “In particular, I thought the extension to Mondrian’s ‘trees’ was impressive,” he told Ars earlier this year. “I like that it establishes a connection between abstract and representational forms. It makes me wonder what would happen if the same idea were to be applied to Pollock’s poured branchings.”

Taylor himself published a 2022 paper about climate change and how nature’s stress-reducing fractals might disappear in the future. “If we are pessimistic for a moment, and assume that climate change will inevitably impact nature’s fractals, then our only future source of fractal aesthetics will be through art, design and architecture,” he said. “This brings a very practical element to studies like [this].”

DOI: arXiv, 2024. 10.48550/arXiv.2402.13520 (About DOIs).

IDing George Washington’s descendants

A DNA study identified descendants of George Washington from unmarked remains. Credit: Public domain

DNA profiling is an incredibly useful tool in forensics, but the most common method—short tandem repeat (STR) analysis—typically doesn’t work when remains are badly degraded, particularly if they have been preserved with formaldehyde-based embalming methods. This includes the remains of US service members who died in such past conflicts as World War II, Korea, Vietnam, and the Cold War. That’s why scientists at the Armed Forces Medical Examiner System’s identification lab at the Dover Air Force Base have developed new DNA sequencing technologies.

They used those methods to identify the previously unmarked remains of descendants of George Washington, according to a March paper published in the journal iScience. The team tested three sets of remains and compared the results with those of a known living descendant, using methods for assessing paternal and maternal relationships, as well as a new method for analyzing next-generation sequencing data involving some 95,000 single-nucleotide polymorphisms (SNPs) to better predict more distant ancestry. The combined data confirmed that the remains belonged to Washington’s descendants, and the new methods should help do the same for the remains of as-yet-unidentified service members.

In related news, in July, forensic scientists successfully used descendant DNA to identify a victim of the 1921 Tulsa Race Massacre in Tulsa, Oklahoma, buried in a mass grave containing more than a hundred victims. C.L. Daniel was a World War I veteran, still in his 20s when he was killed. More than 120 such graves have been found since 2020, with DNA collected from around 30 sets of remains, but this is the first time those remains have been directly linked to the massacre. There are at least 17 other victims in the grave where Daniel’s remains were found.

DOI: iScience, 2024. 10.1016/j.isci.2024.109353 (About DOIs).

Spidey-inspired web-slinging tech

A stream of liquid silk quickly turns into a strong fiber that sticks to and lifts objects. Credit: Marco Lo Presti et al., 2024

Over the years, researchers in Tufts University’s Silklab have come up with all kinds of ingenious bio-inspired uses for the sticky fibers found in silk moth cocoons: adhesive glues, printable sensors, edible coatings, and light-collecting materials for solar cells, to name a few. Their latest innovation is a web-slinging technology inspired by Spider-Man’s ability to shoot webbing from his wrists, described in an October paper published in the journal Advanced Functional Materials.

Coauthor Marco Lo Presti was cleaning glassware with acetone in the lab one day when he noticed something that looked a lot like webbing forming on the bottom of a glass. He realized this could be the key to better replicating spider threads for the purpose of shooting the fibers from a device, Spider-Man style—something actual spiders don’t do. (They spin the silk, find a surface, and draw out lines of silk to build webs.)

The team boiled silk moth cocoons in a solution to break them down into proteins called fibroin. The fibroin was then extruded through narrow-bore needles into a stream. Spiking the fibroin solution with just the right additives causes it to solidify into fiber once it comes into contact with air. For the web-slinging technology, they added dopamine to the fibroin solution and then shot it through a needle in which the solution was surrounded by a layer of acetone, which triggered solidification.

The acetone quickly evaporated, leaving just the webbing attached to whatever object it happened to hit. The team tested the resulting fibers and found they could lift a steel bolt, a tube floating on water, a partially buried scalpel, and a wooden block—all from as far away as 12 centimeters. Sure, natural spider silk is still about 1,000 times stronger than these fibers, but it’s still a significant step forward that paves the way for novel technological applications.

DOI: Advanced Functional Materials, 2024. 10.1002/adfm.202414219

Solving a mystery of a 12th century supernova

Pa 30 is the supernova remnant of SN 1181. Credit: unWISE (D. Lang)/CC BY-SA 4.0

In 1181, astronomers in China and Japan recorded the appearance of a “guest star” that shone as bright as Saturn and was visible in the sky for six months. We now know it was a supernova (SN 1181), one of only five such events known to have occurred in our Milky Way. Astronomers got a closer look at the remnant of that supernova and have determined the nature of strange filaments resembling dandelion petals that emanate from a “zombie star” at its center, according to an October paper published in The Astrophysical Journal Letters.

The Chinese and Japanese astronomers only recorded an approximate location for the unusual sighting, and for centuries no one managed to make a confirmed identification of a likely remnant from that supernova. Then, in 2021, astronomers measured the speed of expansion of a nebula known as Pa 30, which enabled them to determine its age: around 1,000 years, roughly coinciding with the recorded appearance of SN 1181. Pa 30 is an unusual remnant because of its zombie star—most likely itself a remnant of the original white dwarf that produced the supernova.

This latest study relied on data collected by Caltech’s Keck Cosmic Web Imager, a spectrograph at the Keck Observatory in Hawaii. One of the unique features of this instrument is that it can measure the motion of matter in a supernova and use that data to create something akin to a 3D movie of the explosion. The authors were able to create such a 3D map of Pa 30 and calculated that the zombie star’s filaments have ballistic motion, moving at approximately 1,000 kilometers per second.

Nor has that velocity changed since the explosion, enabling the team to date the event almost exactly to 1181. The findings also raised fresh questions: the ejected filament material is asymmetrical, which is unusual for a supernova remnant, and there is a strange inner gap around the zombie star. The authors suggest the asymmetry may originate with the initial explosion; both features will be the focus of further research.
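
Ballistic expansion is what makes the dating work: a filament coasting at constant speed since the explosion has an age of simply distance divided by speed. A back-of-the-envelope sketch (the filament extent used here is an illustrative assumption, not a figure from the paper):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year, in seconds

def remnant_age_years(extent_km, speed_km_s):
    """Age of a freely expanding (ballistic) remnant: time = distance / speed."""
    return extent_km / speed_km_s / SECONDS_PER_YEAR

# A filament tip ~2.66e13 km from the zombie star, coasting at the
# measured ~1,000 km/s, implies an age of roughly 843 years as seen
# in 2024—consistent with a "guest star" observed in 1181.
print(round(remnant_age_years(2.66e13, 1_000)))  # 843
```

Because the speed hasn’t changed, the same arithmetic run backward pins the explosion date; any deceleration would have made the remnant look younger than it really is.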

DOI: Astrophysical Journal Letters, 2024. 10.3847/2041-8213/ad713b (About DOIs).

Reviving a “lost” 16th century score

Fragment of music from The Aberdeen Breviary: Volume 1. Credit: National Library of Scotland/CC BY 4.0

Never underestimate the importance of marginalia in old manuscripts. Scholars from the University of Edinburgh and KU Leuven in Belgium can attest to that, having discovered a fragment of “lost” music from 16th-century pre-Reformation Scotland in a collection of worship texts. The team was even able to reconstruct the fragment and record it to get a sense of what music sounded like from that period in northeast Scotland, as detailed in a December paper published in the journal Music and Letters.

King James IV of Scotland commissioned the printing of several copies of The Aberdeen Breviary—a collection of prayers, hymns, readings, and psalms for daily worship—so that his subjects wouldn’t have to import such texts from England or Europe. One 1510 copy, known as the “Glamis copy,” is currently housed in the National Library of Scotland in Edinburgh. It was while examining handwritten annotations in this copy that the authors discovered the musical fragment on a page bound into the book—so it hadn’t been slipped between the pages at a later date.

The team figured out the piece was polyphonic, and then realized it was the tenor part from a harmonization for three or four voices of the hymn “Cultor Dei,” typically sung at night during Lent. (You can listen to a recording of the reconstructed composition here.) The authors also traced some of the history of this copy of The Aberdeen Breviary, including its use at one point by a rural chaplain at Aberdeen Cathedral, before a Scottish Catholic acquired it as a family heirloom.

“Identifying a piece of music is a real ‘Eureka’ moment for musicologists,” said coauthor David Coney of Edinburgh College of Art. “Better still, the fact that our tenor part is a harmony to a well-known melody means we can reconstruct the other missing parts. As a result, from just one line of music scrawled on a blank page, we can hear a hymn that had lain silent for nearly five centuries, a small but precious artifact of Scotland’s musical and religious traditions.”

DOI: Music and Letters, 2024. 10.1093/ml/gcae076 (About DOIs).

Jennifer is a senior reporter at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Ten cool science stories we almost missed