

A fluid can store solar energy and then release it as heat months later


Sunlight can cause a molecule to change structure, and then release heat later.

The system works a bit like existing solar water heaters, but with chemical heat storage. Credit: Kypros

Heating accounts for nearly half of the global energy demand, and two-thirds of that is met by burning fossil fuels like natural gas, oil, and coal. Solar energy is a possible alternative, but while we have become reasonably good at storing solar electricity in lithium-ion batteries, we’re not nearly as good at storing heat.

To store heat for days, weeks, or months, you need to trap the energy in the bonds of a molecule that can later release heat on demand. The approach to this particular chemistry problem is called molecular solar thermal (MOST) energy storage. While it has been the next big thing for decades, it never really took off.

In a recent Science paper, a team of researchers from the University of California, Santa Barbara, and UCLA demonstrate a breakthrough that might finally make MOST energy storage effective.

The DNA connection

In the past, MOST energy storage solutions have been plagued by lackluster performance. The molecules either didn’t store enough energy, degraded too quickly, or required toxic solvents that made them impractical. To find a way around these issues, the team led by Han P. Nguyen, a chemist at the University of California, Santa Barbara, drew inspiration from the genetic damage caused by sunburn. The idea was to store energy using a reaction similar to the one that allows UV light to damage DNA.

When you stay out on the beach too long, high-energy ultraviolet light can cause adjacent bases in the DNA (thymine, the T in the genetic code) to link together. This forms a structure known as a (6-4) lesion. When that lesion is exposed to even more UV light, it twists into an even stranger shape called a “Dewar” isomer. In biology, this is rather bad news, as Dewar isomers cause kinks in the DNA’s double-helix spiral that disrupt copying the DNA and can lead to mutations or cancer.

To counter this effect, evolution shaped a specific enzyme called photolyase to hunt (6-4) lesions down and snap them back into their safe, stable forms.

The researchers realized that the Dewar isomer is essentially a molecular battery. This snap-back effect was exactly what Nguyen’s team was looking for, since it releases a lot of heat.

Rechargeable fuel

Molecular batteries, in principle, are extremely good at storing energy. Heating oil, arguably the most popular molecular battery we use for heating, is essentially ancient solar energy stored in chemical bonds. Its energy density stands at around 40 megajoules per kilogram (MJ/kg). To put that in perspective, Li-ion batteries usually pack less than 1 MJ/kg. One of the problems with heating oil, though, is that it is single-use only—it gets burnt when you use it. What Nguyen and her colleagues aimed to achieve with their DNA-inspired substance is essentially a reusable fuel.

To do that, researchers synthesized a derivative of 2-pyrimidone, a chemical cousin of the thymine found in DNA. They engineered this molecule to reliably fold into a Dewar isomer under sunlight and then unfold on command. The result was a rechargeable fuel that could absorb the energy when exposed to sunlight, release it when needed, and return to a “relaxed” state where it’s ready to be charged up again.

Previous attempts at MOST systems have struggled to compete with Li-ion batteries. Norbornadiene, one of the best-studied candidates, tops out at around 0.97 MJ/kg. Another contender, azaborinine, manages only 0.65 MJ/kg. They may be scientifically interesting, but they are not going to heat your house.

Nguyen’s pyrimidone-based system blew those numbers out of the water. The researchers achieved an energy storage density of 1.65 MJ/kg—nearly double the capacity of Li-ion batteries and substantially higher than any previous MOST material.
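For a sense of scale, these figures convert easily into the kilowatt-hours that appear on a heating bill (1 kWh = 3.6 MJ). A quick back-of-the-envelope comparison, using only the approximate numbers quoted in this article:

```python
# Energy densities quoted in the article, in MJ/kg (approximate figures).
DENSITIES = {
    "heating oil (single-use)": 40.0,
    "Li-ion battery": 1.0,           # "less than one MJ/kg"
    "norbornadiene (MOST)": 0.97,
    "azaborinine (MOST)": 0.65,
    "pyrimidone (this work)": 1.65,
}

MJ_PER_KWH = 3.6  # 1 kWh = 3.6 MJ

# Print the contenders from densest to least dense.
for name, mj in sorted(DENSITIES.items(), key=lambda kv: -kv[1]):
    print(f"{name:26s} {mj:5.2f} MJ/kg = {mj / MJ_PER_KWH:5.2f} kWh/kg")
```

The gap to heating oil remains wide, but among rechargeable options, the pyrimidone system is the standout.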

Double rings

The reason for this jump in performance was what the team called compounded strain.

When the pyrimidone molecule absorbs light, it doesn’t just fold; it twists into a fused, bicyclic structure containing two different four-membered rings: 1,2-dihydroazete and diazetidine. Four-membered rings are under immense structural tension. By fusing them together, the researchers created a molecule that is desperate to snap back into its relaxed state.

Achieving high energy density on paper is one thing. Making it work in the real world is another. A major failing of previous MOST systems is that they are solids that need to be dissolved in solvents like toluene or acetonitrile to work. Solvents are the enemy of energy density—by diluting your fuel to 10 percent concentration, for example, you effectively cut your energy density by 90 percent. Any solvent used means less fuel.
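The arithmetic behind that penalty is simple: only the dissolved fuel stores energy, so the mixture's effective energy density scales linearly with the fuel's mass fraction. A minimal sketch (the function name is ours, not from the paper):

```python
def effective_density(neat_mj_per_kg: float, fuel_mass_fraction: float) -> float:
    """Energy density of a fuel/solvent mixture.

    Only the dissolved fuel stores energy, so the mixture's density
    scales linearly with the fuel's mass fraction.
    """
    return neat_mj_per_kg * fuel_mass_fraction

# A 1.65 MJ/kg fuel diluted to 10 percent concentration in a solvent:
print(round(effective_density(1.65, 0.10), 3))  # 0.165 -- 90 percent of capacity lost
```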

Nguyen’s team tackled this by designing a version of their molecule that is a liquid at room temperature, so it doesn’t need a solvent. This simplified operations considerably, as the liquid fuel could be pumped through a solar collector to charge it up and store it in a tank.

Unlike many organic molecules that hate water, Nguyen’s system is compatible with aqueous environments. This means if a pipe leaks, you aren’t spewing toxic fluids like toluene around your house. The researchers even demonstrated that the molecule could work in water and that its energy release was intense enough to boil it.

The MOST-based heating system, the team says in their paper, would circulate this rechargeable fuel through panels on the roof to capture the sun’s light and then store it in the basement tank. The fuel from this tank would later be pumped to a reaction chamber with an acid catalyst that triggers the energy release. Then, through a heat exchanger, this energy would heat up the water in the standard central heating system.

But there’s a catch.

Looking for the leak

The first hurdle is the spectrum of light that puts energy into Nguyen’s fuel. The Sun bathes us in a broad spectrum of light, from infrared to ultraviolet. Ideally, a solar collector should use as much of this as possible, but the pyrimidone molecules only absorb light in the UV-A and UV-B range, around 300-310 nm. That represents about five percent of the total solar spectrum. The vast majority of the Sun’s energy, the visible light and the infrared, passes right through Nguyen’s molecules without charging them.

The second problem is quantum yield. This is a fancy way of asking, “For every 100 photons that hit the molecule, how many actually make it switch to the Dewar isomer state?” For these pyrimidones, the answer is a rather underwhelming number, in the single digits. Low quantum yield means the fluid needs a longer exposure to sunlight to get a full charge.
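To see why a single-digit quantum yield stings, note that at a fixed photon flux, the charging time scales inversely with the yield. A rough sketch under that assumption (ignoring competing losses):

```python
def relative_charge_time(quantum_yield: float) -> float:
    """Charging time relative to a perfect (100 percent) photoconverter,
    assuming constant photon flux and no other loss channels."""
    return 1.0 / quantum_yield

# At a 5 percent quantum yield, a full charge takes ~20x longer
# than it would for a perfect converter:
print(relative_charge_time(0.05))  # 20.0
```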

The researchers hypothesize that the molecule has a fast leak, meaning a non-radiative decay path where the excited molecule shakes off the energy as heat immediately instead of twisting into the storage form. Plugging that leak is the next big challenge for the team.

Finally, the team in their experiments used an acid catalyst that was mixed directly into the storage material. The team admits that in a future closed-loop device, this would require a neutralization step—a reaction that eliminates the acidity after the heat is released. Unless the reaction products can be purified away, this will reduce the energy density of the system.

Still, despite the efficiency issues, the stability of Nguyen’s system looks promising.

The MOST storage?

One of the biggest fears with chemical storage is thermal reversion—the fuel spontaneously discharges because it got a little too warm in the storage tank. But the Dewar isomers of the pyrimidones are incredibly stable. The researchers calculated a half-life of up to 481 days at room temperature for some derivatives. This means the fuel could be charged in the heat of July and still hold most of its charge when you need to heat your home in January. The degradation figures also look decent for a MOST energy storage system. The team ran the system through 20 charge-discharge cycles with negligible decay.
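A half-life implies first-order decay, so the surviving charge after any storage period follows directly. A quick check of the July-to-January scenario, assuming the 481-day figure:

```python
def fraction_remaining(days_elapsed: float, half_life_days: float = 481.0) -> float:
    """Fraction of charged isomers surviving thermal reversion,
    assuming simple first-order (exponential) decay."""
    return 0.5 ** (days_elapsed / half_life_days)

# Roughly 180 days between a July charge and a January discharge:
print(f"{fraction_remaining(180):.0%}")  # about 77% of the charge survives
```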

The problem with separating the acid from the fuel could be solved in a practical system by switching to a different catalyst. The scientists suggest in the paper that in this hypothetical setup, the fuel would flow through an acid-functionalized solid surface to release heat, thus eliminating the need for neutralization afterwards.

Still, we’re rather far away from using MOST systems to heat actual homes. To get there, we’re going to need molecules that absorb far more of the light spectrum and convert to the activated state with a higher efficiency. We’re just not there yet.

Science, 2026. DOI: 10.1126/science.aec6413


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



Ancient Mars was warm and wet, not cold and icy

This is important because it means these rocks were less likely to have been altered in a hydrothermal environment, where scalding hot water was temporarily released by melting ice caused by volcanism or a meteorite impact.

Instead, they appear to have been altered under modest temperatures and persistent heavy rainfall. The authors found distinct similarities between the chemical composition of these clay pebbles with similar clays found on Earth dating from periods in our planet’s history when the climate was much warmer and wetter.

False colour image of the dried up river delta in Jezero crater, which Perseverance is currently exploring.

Credit: NASA


The paper concludes that these kaolinite pebbles were altered under high rainfall conditions comparable to “past greenhouse climates on Earth” and that they “likely represent some of the wettest intervals and possibly most habitable portions of Mars’ history”.

Furthermore, the paper concludes that these conditions may have persisted over time periods ranging from thousands to millions of years. Perseverance also made headlines recently for the discovery of possible biosignatures in samples it collected last year from within Jezero crater.

These precious samples have now been cached in special sealed containers on the rover for collection by a future Mars sample return mission. Unfortunately, the mission has recently been cancelled by NASA, and so what vital evidence they may or may not contain will probably not be examined in an Earth-based laboratory for many years.

Crucial to this future analysis is the so-called “Knoll criterion” – a concept formulated by astrobiologist Andrew Knoll, which states that for something to be evidence of life, an observation has to not just be explicable by biology; it has to be inexplicable without it. Whether these samples ever satisfy the Knoll criterion will only be known if they can be brought to Earth.

Either way, it is quite striking to imagine that, billions of years before the first humans walked the Earth, a tropical climate with – possibly – a living ecosystem once existed in the now desolate and wind-swept landscape of Jezero crater.

Gareth Dorrian is a Post Doctoral Research Fellow in Space Science at the University of Birmingham

This article is republished from The Conversation under a Creative Commons license. Read the original article.



“It ain’t no unicorn”: These researchers have interviewed 130 Bigfoot hunters

It was the image that launched a cultural icon. In 1967, in the Northern California woods, a 7-foot-tall, ape-like creature covered in black fur and walking upright was captured on camera, at one point turning around to look straight down the lens. The image is endlessly copied in popular culture—it’s even become an emoji. But what was it? A hoax? A bear? Or a real-life example of a mysterious species called the Bigfoot?

The film has been analysed and re-analysed countless times. Although most people believe it was some sort of hoax, there are some who argue that it’s never been definitively debunked. One group of people, dubbed Bigfooters, is so intrigued that they have taken to the forests of Washington, California, Oregon, Ohio, Florida, and beyond to look for evidence of the mythical creature.

But why? That’s what sociologists Jamie Lewis and Andrew Bartlett wanted to uncover. They were itching to understand what prompts this community to spend valuable time and resources looking for a beast that is highly unlikely to even exist. During lockdown, Lewis started interviewing more than 130 Bigfooters (and a few academics) about their views, experiences, and practices, culminating in the duo’s recent book “Bigfooters and Scientific Inquiry: On the Borderlands of Legitimate Science.”

Here, we talk to them about their academic investigation.

What was it about the Bigfoot community that you found so intriguing?

Lewis: It started when I was watching either the Discovery Channel or Animal Planet and a show called Finding Bigfoot was advertised. I was really keen to know why this program was being scheduled on what certainly at the time was a nominally serious and sober natural history channel. The initial plan was to do an analysis of these television programmes, but we felt that wasn’t enough. It was lockdown and my wife was pregnant and in bed a lot with sickness, so I needed to fill my time.

Bartlett: One of the things that I worked on when Jamie and I shared an office in Cardiff was a sociological study of fringe physicists. These are people mostly outside of academic institutions trying to do science. I was interviewing these people, going to their conferences. And that led relatively smoothly into Bigfoot, but it was Jamie’s interest in Bigfoot that brought me to this field.

How big is this community?

Lewis: It’s very hard to put a number on it. There is certainly a divide between what are known as “apers,” who believe that Bigfoot is just a primate unknown to science, and those that are perhaps more derogatorily called “woo-woos,” who believe that Bigfoot is some sort of interdimensional traveller, an alien of sorts. We’re talking in the thousands of people. But there are a couple of hundred really serious people, of which I probably interviewed at least half.

Many people back them. A YouGov survey conducted as recently as November 2025 suggested that as many as one quarter of Americans believe that Bigfoot either definitely or probably exists.

Were the interviewees suspicious of your intentions?

Lewis: I think there was definitely a worry that they would be caricatured. And I was often asked, “Do I believe in Bigfoot?” I had a standard answer that Andy and I agreed on, which was that mainstream, institutional science says there is absolutely no compelling evidence that Bigfoot exists. We have no reason to dissent with that consensus. But as sociologists what does exist is a community (or communities) of Bigfooting, and that’s what interests us.



NASA has a new problem to fix before the next Artemis II countdown test

John Honeycutt, chair of NASA’s Artemis II mission management team, said the decision to relax the safety limit between Artemis I and Artemis II was grounded in test data.

“The SLS program, they came up with a test campaign that actually looked at that cavity, the characteristics of the cavity, the purge in the cavity … and they introduced hydrogen to see when you could actually get it to ignite, and at 16 percent, you could not,” said Honeycutt, who served as NASA’s SLS program manager before moving to his new job.

Hydrogen is explosive in high concentrations when mixed with air. This is what makes hydrogen a formidable rocket fuel. But it is also notoriously difficult to contain. Molecular hydrogen is the smallest molecule, meaning it can readily escape through leak paths, and poses a materials challenge for seals because liquified hydrogen is chilled to minus 423 degrees Fahrenheit (minus 253 degrees Celsius).

So, it turns out NASA used the three-year interim between Artemis I and Artemis II to get comfortable with a more significant hydrogen leak, instead of fixing the leaks themselves. Isaacman said that will change before Artemis III, which likewise is probably at least three years away.

“I will say near-conclusively for Artemis III, we will cryoproof the vehicle before it gets to the pad, and the propellant loading interfaces we are troubleshooting will be redesigned,” Isaacman wrote.

Isaacman took over as NASA’s administrator in December and has criticized the SLS program’s high cost—estimated by NASA’s inspector general at more than $2 billion per rocket—along with the launch vehicle’s torpid flight rate.

NASA’s expenditures for the rocket’s ground systems at Kennedy Space Center are similarly enormous. NASA spent nearly $900 million on Artemis ground support infrastructure in 2024 alone. Much of the money went toward constructing a new launch platform for an upgraded version of the Space Launch System that may never fly.

All of this makes each SLS rocket a golden egg, a bespoke specimen that must be treated with care because it is too expensive to replace. NASA and Boeing, the prime contractor for the SLS core stage, never built a full-size test model of the core stage. There’s currently no way to completely test the cryogenic interplay between the core stage and ground equipment until the fully assembled rocket is on the launch pad.

Existing law requires NASA continue flying the SLS rocket through the Artemis V mission. Isaacman wrote that the Artemis architecture “will continue to evolve as we learn more and as industry capabilities mature.” In other words, NASA will incorporate newer, cheaper, reusable rockets into the Artemis program.

The next series of launch opportunities for the Artemis II mission begin March 3. If the mission doesn’t lift off in March, NASA will need to roll the rocket back to the Vehicle Assembly Building to refresh its flight termination system. There are more launch dates available in April and May.

“There is still a great deal of work ahead to prepare for this historic mission,” Isaacman wrote. “We will not launch unless we are ready and the safety of our astronauts will remain the highest priority. We will keep everyone informed as NASA prepares to return to the Moon.”



Astronomers are filling in the blanks of the Kuiper Belt


Are you out there, Planet X?

Next-generation telescopes are mapping this outer frontier.

Credit: NASA/SOFIA/Lynette Cook

Out beyond the orbit of Neptune lies an expansive ring of ancient relics, dynamical enigmas, and possibly a hidden planet—or two.

The Kuiper Belt, a region of frozen debris about 30 to 50 times farther from the sun than the Earth is—and perhaps farther, though nobody knows—has been shrouded in mystery since it first came into view in the 1990s.

Over the past 30 years, astronomers have cataloged about 4,000 Kuiper Belt objects (KBOs), including a smattering of dwarf worlds, icy comets, and leftover planet parts. But that number is expected to increase tenfold in the coming years as observations from more advanced telescopes pour in. In particular, the Vera C. Rubin Observatory in Chile will illuminate this murky region with its flagship project, the Legacy Survey of Space and Time (LSST), which began operating last year. Other next-generation observatories, such as the James Webb Space Telescope (JWST), will also help to bring the belt into focus.

“Beyond Neptune, we have a census of what’s out there in the solar system, but it’s a patchwork of surveys, and it leaves a lot of room for things that might be there that have been missed,” says Renu Malhotra, who serves as Louise Foucar Marshall Science Research Professor and Regents Professor of Planetary Sciences at the University of Arizona.

“I think that’s the big thing that Rubin is going to do—fill out the gaps in our knowledge of the contents of the solar system,” she adds. “It’s going to greatly advance our census and our knowledge of the contents of the solar system.”

As a consequence, astronomers are preparing for a flood of discoveries from this new frontier, which could shed light on a host of outstanding questions. Are there new planets hidden in the belt, or lurking beyond it? How far does this region extend? And are there traces of cataclysmic past encounters between worlds—both homegrown or from interstellar space—imprinted in this largely pristine collection of objects from the deep past?

“I think this will become a very hot field very soon, because of LSST,” says Amir Siraj, a graduate student at Princeton University who studies the Kuiper Belt.

The Kuiper Belt is a graveyard of planetary odds and ends that were scattered far from the sun during the messy birth of the solar system some 4.6 billion years ago. Pluto was the first KBO ever spotted, more than a half-century before the belt itself was discovered.

Since the 1990s, astronomers have found a handful of other dwarf planets in the belt, such as Eris and Sedna, along with thousands of smaller objects. While the Kuiper Belt is not completely static, it is, for the most part, an intact time capsule of the early solar system that can be mined for clues about planet formation.

For example, the belt contains weird structures that may be signatures of past encounters between giant planets, including one particular cluster of objects, known as a “kernel,” located at about 44 astronomical units (AU), where one AU is the distance between Earth and the sun (about 93 million miles).

While the origin of this kernel is still unexplained, one popular hypothesis is that its constituent objects—which are known as cold classicals—were pulled along by Neptune’s outward migration through the solar system more than 4 billion years ago, which may have been a bumpy ride.

The idea is that “Neptune got jiggled by the rest of the gas giants and did a bit of a jump; it’s called the ‘jumping Neptune’ scenario,” says Wes Fraser, an astronomer at the Dominion Astrophysical Observatory, National Research Council of Canada, who studies the Kuiper Belt, noting that astronomer David Nesvorný came up with the idea.

“Imagine a snowplow driving along a highway, and lifting up the plow. It leaves a clump of snow behind,” he adds. “That same sort of idea is what left the clump of cold classicals behind. That is the kernel.”

In other words, Neptune tugged these objects along with it as it migrated outward, but then broke its gravitational hold over them when it “jumped,” leaving them to settle into the Kuiper Belt in the distinctive Neptune-sculpted kernel pattern that remains intact to this day.

Last year, Siraj and his advisers at Princeton set out to look for other hidden structures in the Kuiper Belt with a new algorithm that analyzed 1,650 KBOs—about 10 times as many objects as the 2011 study, led by Jean-Robert Petit, that first identified the kernel.

The results consistently confirmed the presence of the original kernel, while also revealing a possibly new “inner kernel” located at about 43 AU, though more research is needed to confirm this finding, according to the team’s 2025 study.

“You have these two clumps, basically, at 43 and 44 AU,” Siraj explains. “It’s unclear whether they’re part of the same structure,” but “either way, it’s another clue about, perhaps, Neptune’s migration, or some other process that formed these clumps.”

As Rubin and other telescopes discover thousands more KBOs in the coming years, the nature and possible origin of these mysterious structures in the belt may become clearer, potentially opening new windows into the tumultuous origins of our solar system.

In addition to reconstructing the early lives of the known planets, astronomers who study the Kuiper Belt are racing to spot unknown planets. The most famous example is the hypothetical giant world known as Planet Nine or Planet X, first proposed in 2016. Some scientists have suggested that the gravitational influence of this planet, if it exists, might explain strangely clustered orbits within the Kuiper Belt, though this speculative world would be located well beyond the belt, at several hundred AU.

Siraj and his colleagues have also speculated about the possibility of a Mercury- or Mars-sized world, dubbed Planet Y, that may be closer to the belt, at around 80 to 200 AU, according to their 2025 study. Rubin is capable of spotting these hypothetical worlds, though it may be challenging to anticipate the properties of planets that lurk this far from the sun.

“We know nothing about the atmospheres and surfaces of gas giant or ice giant type planets at 200, 300, or 400 AU,” Fraser says. “We know nothing about their chemistry. Every single time we look at an exoplanet, it behaves differently than what our models predict.”

“I think Planet Nine might very well just be a tar ball that is so dark that we can’t see it, and that’s why it hasn’t been discovered yet,” he adds. “If we found that, I wouldn’t be too surprised. And who knows what an Earth [in the belt] would look like? Certainly the compositional makeup will be different than a Mars, or an Earth, or a Venus, in the inner solar system.”

Observatories like Rubin and JWST may fill in these tantalizing gaps in our knowledge of the Kuiper Belt, and perhaps pinpoint hidden planets. But even if these telescopes reveal an absence of planets, it would be a breakthrough.

“There’s a lot of room for discovery of large bodies,” says Malhotra. “That would be awesome, but if we don’t find any, that would tell us something as well.”

“Not finding them up to some distance would give us estimates of how efficient or inefficient the planet formation process was,” she adds. “It would fill in some of the uncertainties that we have in our models.”

One other major open question about the Kuiper Belt is the extent of its boundaries. The belt suddenly tapers off at about 50 AU, an edge called the Kuiper cliff. This is a puzzling feature, because it suggests that our solar system has an anomalously small debris belt compared with other systems.

“The solar system looks kind of weird,” Fraser says. “The Kuiper cliff is a somewhat sharp delineation. Beyond that, we have no evidence that there was a disk of material. And yet, if you look at other stellar systems that have debris disks, the vast majority of those are significantly larger.”

“If we were to find a debris disk at, say, 100 AU, that would immediately make the solar system not weird, and quite average at that point,” he notes.

In 2024, Fraser and his colleagues presented hints of a possible undiscovered population of objects that may exist at about 100 AU—though he emphasizes that these are candidate detections, and are not yet confirmed to be a hidden outer ring.

However, even Rubin may not be able to resolve the presence of the tiny and distant objects that could represent a new outer limit of the Kuiper Belt. Time will tell.

As astronomers gear up for this major step change in our understanding of the Kuiper Belt, answers to some of our most fundamental questions hang in the balance. With its immaculate record of the early solar system, this region preserves secrets from the deep past. Here there are probably not dragons, but there may well be hidden planets, otherworldly structures, and discoveries that haven’t yet been imagined.

“I’d say the big question is, what’s out there?” Malhotra says. “What are we missing?”

This story originally appeared on wired.com.


Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.



When Amazon badly needed a ride, Europe’s Ariane 6 rocket delivered

The Ariane 64 flew with an extended payload shroud to fit all 32 Amazon Leo satellites. Combined, the payload totaled around 20 metric tons, or about 44,000 pounds, according to Arianespace. This is close to maxing out the Ariane 64’s lift capability.

Amazon has booked more than 100 missions across four launch providers to populate the company’s planned fleet of more than 3,200 satellites. With Thursday’s launch, Amazon has launched 214 production satellites on eight missions with United Launch Alliance, SpaceX, and now Arianespace.

The Amazon Leo constellation is a competitor with SpaceX’s Starlink Internet network. SpaceX now has more than 9,000 satellites in orbit beaming broadband to more than 9 million subscribers, and all have launched on the company’s own Falcon 9 rockets. Amazon, meanwhile, initially bypassed SpaceX when selecting which companies would launch satellites for the Amazon Leo program, formerly known as Project Kuiper.

Amazon booked the last nine launches on ULA’s soon-to-retire Atlas V, five of which have now flown, and reserved the rest of its launches in 2022 on rockets that had never launched before: 38 flights on ULA’s new Vulcan rocket, 24 launches on Blue Origin’s New Glenn, and 18 on Europe’s Ariane 6.

An artist’s illustration of the Ariane 6’s upper stage in orbit with a stack of Amazon Leo satellites awaiting deployment.

Credit: Arianespace


Meanwhile, in Florida

All three new rockets suffered delays but are now in service. The Ariane 6 has enjoyed the fastest ramp-up in launch cadence, with six flights under its belt after Thursday’s mission from French Guiana. ULA’s Vulcan rocket has flown four times, and Amazon says its first batch of satellites to fly on Vulcan is now complete. But a malfunction with one of the Vulcan launcher’s solid rocket boosters on a military launch from Florida early Thursday—the second such anomaly in three flights—raises questions about when Amazon will get its first ride on Vulcan.

Blue Origin, owned by Amazon founder Jeff Bezos, is gearing up for the third flight of its heavy-lift New Glenn rocket from Florida as soon as next month. Amazon and Blue Origin have not announced when the first group of Amazon Leo satellites will launch on New Glenn.



Tiny, 45-base-long RNA can make copies of itself


Self-copying RNAs may have been a key stop along the pathway to life.

By base pairing with themselves, RNAs can form complex structures with enzymatic activity. Credit: Laguna Design

There are plenty of unanswered questions about the origin of life on Earth. But the research community has largely reached consensus that one of the key steps was the emergence of an RNA molecule that could replicate itself. RNA, like its more famous relative DNA, can carry genetic information. But it can also fold up into three-dimensional structures that act as catalysts. These two features have led to the suggestion that early life was protein-free, with RNA handling both heredity and catalyzing a simple metabolism.

For this to work, one of the reactions that the early RNAs would need to catalyze is the copying of RNA molecules, without which any sort of heritability would be impossible. While we’ve found a number of catalytic RNAs that can copy other molecules, none have been able to perform a key reaction: making a copy of themselves. Now, however, a team has found an incredibly short piece of RNA—just 45 bases long—that can make a copy of itself.

Finding an RNA polymerase

We have identified a large number of catalytic RNAs (generically called ribozymes, for RNA-based enzymes), and some of them can catalyze reactions involving other RNAs. A handful of these are ligases, which link together two RNA molecules. In some cases, they need these molecules to be held together by a third RNA molecule that base pairs with both of them. We’ve only identified a few that can act as polymerases, which add RNA bases to a growing molecule, one at a time, with each new addition base pairing with a template molecule.


Some ligases can link two nucleic acid strands (left), while others can link the strands only if they’re held together by base pairing with a template (center). A polymerase can be thought of as a template-dependent ligase that adds one base at a time. The newly discovered ribozyme sits somewhere between a template-directed ligase and a polymerase.

Credit: John Timmer


Obviously, there is some functional overlap between them, as you can think of a polymerase as ligating on one base at a time. And in fact, at the ribozyme level, there’s some real-world overlap, as some ribozymes that were first identified as ligases were converted into polymerases by selecting for this new function.

While this is fascinating, there are a few problems with these known examples of polymerase ribozymes. One is that they’re long. So long, in fact, that they’re beyond the length of the sort of molecules that we’ve observed forming spontaneously from a mix of individual RNA bases. This length also means they’re largely incapable of making copies of themselves—the reactions are slow and inefficient enough that they simply stop before copying the entire molecule.

Another factor related to their length is that they tend to form very complex structures, with many different areas of the molecule base-paired to one another. That leaves very little of the molecule in a single-stranded form, which is needed to make a copy.

Based on past successes, a French-UK team decided to start a search for a polymerase by looking for a ligase. And they limited that search in an important way: They only tested short molecules. They started with pools of RNA molecules, each with a different random sequence, ranging from 40 to 80 bases. Overall, they estimated that they made a population of 10¹³ molecules out of the total possible population of 10²⁴ sequences of this type.
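To get a sense of that scale, here is a rough back-of-the-envelope calculation (my arithmetic, not the paper's): each position in an RNA can be one of four bases, so the number of distinct sequences of length n is 4 to the power n, and even the shortest length tested dwarfs the sampled pool.

```python
# Illustrative arithmetic only (not from the paper): the number of distinct
# RNA sequences of length n is 4**n, since each position is A, C, G, or U.
pool_sampled = 10**13   # molecules the team estimated they synthesized

space_40 = 4**40        # distinct 40-base sequences, the shortest length tested
print(f"4^40 = {space_40:.1e}")                            # ~1.2e+24
print(f"fraction sampled: {pool_sampled / space_40:.0e}")  # ~8e-12
```

Even restricted to 40-base molecules, the screen covered only about one sequence in a hundred billion.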

These random molecules were fed a collection of three-base-long RNAs, each linked to a chemical tag. The idea was that if a molecule is capable of ligating one of these short RNA fragments to itself, it could be pulled out using the tag. The mixtures were then placed in a salty mixture of water and ice, as this can promote reactions involving RNAs.

After 11 rounds of reactions and tag-based purification, the researchers ended up with three different RNA molecules that could each ligate three-base-long RNAs to existing molecules. Each of these molecules was subjected to mutagenesis and further rounds of selection. This ultimately left the researchers with a single, 51-base-long molecule that could add clusters of three bases to a growing RNA strand, depending on their ability to base-pair with an RNA template. They called this “polymerase QT-51,” with QT standing for “quite tiny.” They later found that they could shorten this to QT-45 without losing significant enzyme activity.

Checking its function

The basic characterization of QT-45 showed that it has some very impressive properties for a molecule that, by nucleic acid standards, is indeed quite tiny. While it was selected for linking collections of molecules that were three bases long, it could also link longer RNAs, work on shorter two-base molecules, or even add a single base at a time, though this was less efficient. While it worked slowly, the molecule’s active half-life was well over 100 days, so it had plenty of time to get things done before it degraded.

It also didn’t need to interact with any specific RNA sequences to work, suggesting it had a general affinity for RNA molecules. As a result, it wasn’t especially picky about the sequences it could copy.

As you might expect from such a small molecule, QT-45 didn’t tolerate changes to its own sequence very well—nearly the entire molecule was important in one way or another. Tests that involved changing every single individual base one at a time showed that almost all the changes reduced the ribozyme’s activity. There were, however, a handful of changes that improved things, suggesting that further selection could potentially yield additional improvements. And the impact of mutations near the center of the sequence was far more severe, suggesting that region is critical for QT-45’s enzymatic activity.

The team then started testing its ability to synthesize copies of other RNA molecules when given a mixture of all possible three-base sequences. One of the tests included a large stretch in which one end of the sequence base-paired with the other. To copy that, those base pairs need to somehow be pried apart. But QT-45 was able to make a copy, meaning it synthesized a strand that was able to base pair with the original.

It was also able to make a copy of a template strand that would base pair with a small ribozyme. That copying produced an active ribozyme.

But the key finding was that it could synthesize a sequence that base-pairs with itself, and then synthesize itself by copying that sequence. This was horribly inefficient and took months, but it happened.

Throughout these experiments, the fidelity averaged about 95 percent, meaning that, in copying itself, it would make an average of two to three errors. While this means a fair number of its copies wouldn’t be functional, it also means the raw materials for an evolutionary selection for improved function—random mutations—would be present.
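That error estimate follows from simple arithmetic (my worked example, not a calculation taken from the paper): at roughly 95 percent per-base fidelity over a 45-base molecule, the expected error count and the odds of a letter-perfect copy fall out directly.

```python
# Back-of-the-envelope check of the ~95 percent fidelity figure.
length = 45      # bases in QT-45
fidelity = 0.95  # average per-base copying accuracy

expected_errors = length * (1 - fidelity)
perfect_copy = fidelity ** length  # probability every base is copied correctly

print(f"expected errors per copy: {expected_errors:.2f}")   # ~2.25
print(f"chance of an error-free copy: {perfect_copy:.1%}")  # ~9.9%
```

So roughly one copy in ten would be error-free, while the rest would carry the random mutations that selection could act on.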

What this means

It’s worth taking a moment to consider the use of three-base RNA fragments by this enzyme. On the surface, this may seem a bit like cheating, since current RNA polymerases add sequence one base at a time. But in reality, any chemical environment that could spontaneously assemble an RNA molecule 45 bases long will produce many fragments shorter than that. So in many ways, this might be a more realistic model of the conditions in which life emerged.

The authors note that these shorter fragments may be essential for QT-45’s activity. The short ribozyme probably doesn’t have the ability to enzymatically pry base-paired strands of RNA apart to copy them. But in a mixture of lots of small fragments, there’s likely to be an equilibrium, with some base-paired sequences spontaneously popping open and temporarily base pairing with a shorter fragment. Working with these base-paired fragments is probably essential to the ribozyme’s overall activity.

Right now, QT-45 isn’t an impressive enzyme. But the researchers point out that it has only been through 18 rounds of selection, which isn’t much. The most efficient ribozyme polymerases we have at present have been worked on by multiple labs for years. I expect QT-45 to receive similar attention and improve significantly over time.

Also notable is that the team came up with three different ligases in a test of just a small subset of the possible total RNA population of this size. If that frequency holds, there are on the order of 10¹¹ ligating ribozymes among the sequences of this size, which raises the possibility that an exhaustive search would turn up far more. That suggests the first self-copying RNA might not be as improbable as it seems at first.
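The extrapolation behind that estimate is straightforward (a sketch using the article's round numbers, not the paper's exact figures): the observed hit rate in the sampled pool is scaled up to the full sequence space.

```python
# Scaling the observed ligase frequency up to the full sequence space
# (illustrative; uses the round numbers quoted in the text).
hits = 3                 # distinct ligases recovered from the selection
sampled = 10**13         # sequences actually screened
total_space = 10**24     # possible sequences in this length range

estimated_ligases = (hits / sampled) * total_space
print(f"~{estimated_ligases:.0e} ligases in the full space")  # ~3e+11
```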

Science, 2026. DOI: 10.1126/science.adt2760  (About DOIs).

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Bringing the “functionally extinct” American chestnut back from the dead


Wiped out in its native range by invasive pathogens, the trees may make a comeback.

Very few people alive today have seen the Appalachian forests as they existed a century ago. Even as state and national parks preserved ever more of the ecosystem, fungal pathogens from Asia nearly wiped out one of the dominant species of these forests, the American chestnut, killing an estimated 3 billion trees. While new saplings continue to sprout from the stumps of the former trees, the fungus persists, killing them before they can seed a new generation.

But thanks in part to trees planted in areas where the two fungi don’t grow well, the American chestnut isn’t extinct. And efforts to revive it in its native range have continued, despite the long generation times needed to breed resistant trees. In Thursday’s issue of Science, researchers describe their efforts to apply modern genomic techniques and exhaustive testing to identify the best route to restoring chestnuts to their native range.

Multiple paths to restoration

While the American chestnut is functionally extinct—it’s no longer a participant in the ecosystems it once dominated—it’s most certainly not extinct. Two Asian fungi have killed it off in its native range: one causes chestnut blight, while a less common pathogen causes a root rot disease. Both prefer warmer, humid environments and persist there because they can grow asymptomatically on distantly related trees, such as oaks. Still, chestnuts planted outside the species’ original range—primarily in drier areas of western North America—have continued to thrive.

There is also a virus that attacks the chestnut blight fungus, allowing a few trees to survive in areas where that virus is common. Finally, a handful of trees have grown to maturity in the American chestnut’s original range. These trees, which the paper refers to as LSACs (large surviving American chestnuts), suggest that there might have been some low level of natural resistance within the now-vanished population.

Those trees are central to one of the efforts to restore the American chestnut. If enough of them have distinct means of resisting the fungi, interbreeding them might produce a strain that not only survives the fungi but can also thrive in the Appalachians.

A related approach took advantage of the fact that the American chestnut can produce fertile hybrids with the Chinese chestnut, which co-evolved with the introduced fungi and is thus resistant to lethal infections. The hope was that continued back-breeding of these hybrids with American chestnuts would result in trees that were very similar to American chestnuts yet retained the fungal resistance of their Asian cousins.

Both efforts suffered from the same problem that faces any biologist working on trees: They are slow-growing and can take years to reach a size at which they produce seeds. The situation was further complicated by the fact that the American chestnut can’t pollinate itself, so you need at least two trees before any breeding is possible.

Concerned about what this might mean for the potential reintroduction of the chestnut into the Appalachians, a third project turned to biotechnology. Research had identified oxalic acid as a key factor in the blight’s virulence. Wheat naturally produces an enzyme that degrades oxalic acid, and researchers inserted the gene that encodes that enzyme into the American chestnut genome, creating a genetically modified tree that can potentially disarm the fungus’ attack.

Without understanding the nature of resistance or the effectiveness of the transgenic gene, there’s no way to know which method would be most effective. So researchers from the American Chestnut Foundation assembled a massive collaboration to examine all these options and determine what would be needed to reintroduce blight-resistant chestnuts into the wild.

Tracking resistance

The scale of the effort is immense. All told, the team infected over 4,000 individual trees with the blight fungus and tracked their growth in Appalachian nurseries for an average of over 14 years. The trees were scored for resistance on a zero-to-100 scale based on the damage caused by the infection. This data was combined with some serious lab work; the team produced the highest-quality chestnut genomes yet (of both American and Chinese species) and gathered biochemical data on how the trees respond to infection.

It quickly became apparent that there were significant differences in the growth rates of some of the resistant trees. When planted at sites where viruses kept the blight in check, the Chinese chestnuts grew more slowly than native trees, while hybrids grew at an intermediate rate. That could make a big difference, as rapid growth may have enabled the chestnut to reach its former dominance of the canopy.

Somewhat surprisingly, this slow growth turned out to be a problem for the genetically modified American chestnuts as well. By chance, the wheat gene ended up being inserted into a gene known to be important for the growth of other plants. It seems to be important in the chestnut as well; plants with two copies of the inserted genes survived at 16 percent of their expected rate, and those with a single copy grew 22 percent slower than unmodified trees.

That said, there was a lot of variability among the genetically modified trees, with 4 percent of the tested trees showing both high blight resistance and growth comparable to that of unmodified American chestnuts. It will be important to determine whether this collection of traits remains consistent in ensuing generations.

In a bit of good news, the progeny from surviving American chestnuts grew like American chestnuts. In less good news, among 143 of these trees, only seven had resistance levels of above 50 on the team’s 100-point scale. It’s possible that interbreeding these trees could further boost resistance, but it also poses the risk of creating a population that’s too inbred to thrive after reintroduction.

Root causes

The research team decided to use their testing to investigate the genetic basis of resistance. There’s a very practical reason for this: If resistance is mediated by just a handful of genes that each have large impacts, it should be possible to continue breeding resistant strains back to regular American chestnuts and selecting for resistance. But if there are many factors with relatively small impacts, it will require directed interbreeding of hybrids to maximize both resistance and DNA originating from the American chestnut.

The team completed the highest-quality chestnut genomes for both the American and Chinese species, identifying about 25,000 to 30,000 genes in the different assemblies. They then used this information for two types of genetic analysis: quantitative trait locus identification and genome-wide association. Both approaches aim to identify regions of the genome associated with specific properties and estimate their impact.

The work suggested that resistance arises from a relatively large number of sites, each with relatively minor effects. For example, the sites in the genome identified by quantitative trait analysis typically boosted resistance by about 10 points on the researchers’ 100-point scale. In the genome-wide analysis, 17 individual genetic differences were associated with about a quarter of the heritable resistance traits. All of this suggests that, for the hybrids (and likely for the weaker blight resistance found in surviving American chestnuts), directed breeding among surviving trees will be needed.

For the root rot fungus, in contrast, it looks like there are a limited number of important alleles with a large impact.

The researchers also took an alternative approach to identify resistance factors, comparing 100 chemicals produced by resistant and susceptible strains. Among the 41 chemicals detected at higher levels in the Chinese chestnut, the researchers found a metabolite, lupeol, that completely suppressed the growth of the fungal pathogen. Another, erythrodiol, limited its growth. If we can identify the genes involved in producing those chemicals, we could use that knowledge to guide directed breeding programs—or even engage in gene editing to increase their production.

The team’s current plan is to use genomic predictions to select hybrid seedlings for planting in test orchards, aiming to identify plants with high growth and resistance. From there, the process can be repeated. But even after the exhaustive exploration of resistance traits, the researchers seem to believe that all three approaches—selecting resistant American chestnuts, breeding hybrids derived from Chinese chestnuts, and directed genetic modification—can help bring the American chestnut back.

The researchers warn, though, that as environmental disturbances and invasive species continue to push some key species to the brink of extinction, we need to get better at this kind of species rescue operation.

Science, 2026. DOI: 10.1126/science.adw3225  (About DOIs).




Did seabird poop fuel rise of Chincha in Peru?

A nutrient-rich natural fertilizer

Now Bongers has turned his attention to analyzing the biochemical signatures of 35 maize samples excavated from buried tombs in the region. He and his co-authors found significantly higher levels of nitrogen in the maize than in the natural soil conditions, suggesting the Chincha used guano as a natural fertilizer. The guano from such birds as the guanay cormorant, the Peruvian pelican, and the Peruvian booby contains all the essential growing nutrients: nitrogen, phosphorus, and potassium. All three species are abundant on the Chincha Islands, all within 25 kilometers of the kingdom.

Those results were further bolstered by historical written sources describing how seabird guano was collected and its importance for trade and food production. For instance, during the colonial era, groups would sail to nearby islands on rafts to collect bird droppings to use as crop fertilizer. The Lunahuana people in the Canete Valley just outside of Chincha were known to use bird guano in their fields, and the Inca valued the stuff so highly that they restricted access to the islands during breeding season and forbade the killing of the guano-producing birds on penalty of death.

The 19th-century Swiss naturalist Johann Jakob von Tschudi also reported observing the guano being used as fertilizer, with a fist-sized amount added to each plant before submerging entire fields in water. It was even imported to the US. The authors also pointed out that much of the iconography from Chincha and nearby valleys featured seabirds: textiles, ceramics, balance-beam scales, spindles, decorated gourds, adobe friezes and wall paintings, ceremonial wooden paddles, and gold and silver metalworks.

“The true power of the Chincha wasn’t just access to a resource; it was their mastery of a complex ecological system,” said co-author Jo Osborn of Texas A&M University. “They possessed the traditional knowledge to see the connection between marine and terrestrial life, and they turned that knowledge into the agricultural surplus that built their kingdom. Their art celebrates this connection, showing us that their power was rooted in ecological wisdom, not just gold or silver.”

PLoS ONE, 2026. DOI: 10.1371/journal.pone.0341263 (About DOIs).



After Republican complaints, judicial body pulls climate advice

In short, the state attorneys general object to the document treating facts as facts, as there have been lawsuits that contested them. “Among other things, the Manual states that human activities have ‘unequivocally warmed the climate,’ that it is ‘extremely likely’ human influence drives ocean warming, and that researchers are ‘virtually certain’ about ocean acidification,” their letter states, “treating contested litigation positions as settled fact.” In other words, they’re arguing that, if someone is ignorant enough to start a suit based on ignorance of well-established science, then the Federal Judicial Center should join them in their ignorance.

The attorneys general also complain that the report calls the Intergovernmental Panel on Climate Change an “authoritative science body,” citing a conservative Canadian public policy think tank that disagreed with that assessment.

These complaints were mixed in with some more potentially reasonable complaints about how the climate chapter gave specific suggestions on how to legally approach some issues and assigned significance to one or two recent studies that haven’t yet been validated by follow-on work. But the letter’s authors would not settle for revisions based on a few reasonable complaints; instead, they demand the entire chapter be removed because it accurately reflects the status of climate science.

Naturally, the Federal Judicial Center has agreed. We have confirmed that the current version of the document no longer includes a chapter on climate science, even though the foreword by Supreme Court Justice Elena Kagan still mentions it. The full text of the now-deleted chapter has been posted by the RealClimate blog, though.



NIH head, still angry about COVID, wants a second scientific revolution


Can we pander to MAHA, re-litigate COVID, and improve science at the same time?


Bhattacharya speaks before the Senate shortly after the MAHA event. Credit: Chip Somodevilla


At the end of January, Washington, DC, saw an extremely unusual event. The MAHA Institute, which was set up to advocate for some of the most profoundly unscientific ideas of our time, hosted leaders of the best-funded scientific organization on the planet, the National Institutes of Health. Instead of a hostile reception, however, Jay Bhattacharya, the head of the NIH, was greeted as a hero by the audience, receiving a partial standing ovation when he rose to speak.

Over the ensuing five hours, the NIH leadership and MAHA Institute moderators found many areas of common ground: anger over pandemic-era decisions, a focus on the failures of the health care system, the idea that we might eat our way out of some health issues, the sense that science had lost people’s trust, and so on. And Bhattacharya and others clearly shaped their messages to resonate with their audience.

The reason? MAHA (Make America Healthy Again) is likely to be one of the only political constituencies supporting Bhattacharya’s main project, which he called a “second scientific revolution.”

In practical terms, Bhattacharya’s plan for implementing this revolution includes some good ideas that fall far short of a revolution. But his motivation for the whole thing seems to be lingering anger over the pandemic response—something his revolution wouldn’t address. And his desire to shoehorn it into the radical disruption of scientific research pursued by the Trump administration led to all sorts of inconsistencies between his claims and reality.

If this whole narrative seems long, complicated, and confusing, it’s probably a good preview of what we can expect from the NIH over the next few years.

MAHA meets science

Despite the attendance of several senior NIH staff (including the directors of the National Cancer Institute and National Institute of Allergy and Infectious Diseases) and Bhattacharya himself, this was clearly a MAHA event. One of the MAHA Institute’s VPs introduced the event as being about the “reclamation” of a “discredited” NIH that had “gradually given up its integrity.”

“This was not a reclamation that involved people like Anthony Fauci,” she went on to say. “It was a reclamation of ordinary Americans, men and women who wanted our nation to excel in science rather than weaponize it.”

Things got a bit strange. Moderators from the MAHA Institute asked questions about whether COVID vaccines could cause cancer and spoke favorably about the possibility of a lab leak. An audience member asked why alternative treatments aren’t being researched. A speaker who proudly announced that he and his family had never received a COVID vaccine was roundly applauded. Fifteen minutes of the afternoon were devoted to a novelist seeking funding for a satirical film about the pandemic that portrayed Anthony Fauci as an egomaniacal lightweight, vaccines as a sort of placebo, and Bhattacharya as the hero of the story.

The organizers also had some idea of who might give all of this a hostile review, as reporters from Nature and Science said they were denied entry.

In short, this was not an event you’d go to if you were interested in making serious improvements to the scientific method. But that’s exactly how Bhattacharya treated it, spending the afternoon not only justifying the changes he’s made within the NIH but also arguing that we’re in need of a second scientific revolution—and he’s just the guy to bring it about.

Here’s an extensive section of his introduction to the idea:

I want to launch the second scientific revolution.

Why this grandiose vision? The first scientific revolution you have… very broadly speaking, you had high ecclesiastical authority deciding what was true or false on physical, scientific reality. And the first scientific revolution basically took… the truth-making power out of the hands of high ecclesiastical authority for deciding physical truth. We can leave aside spiritual—that is a different thing—physical truth and put it in the hands of people with telescopes. It democratized science fundamentally, it took the hands of power to decide what’s true out of the hands of authority and put it in the hands of ridiculous geniuses and regular people.

The second scientific revolution, then, is very similar. The COVID crisis, if it was anything, was the crisis of high scientific authority getting to decide not just a scientific truth like “plexiglass is going to protect us from COVID” or something, but also essentially spiritual truth. How should we treat our neighbor? Well, we treat our neighbor as a mere biohazard.

The second scientific revolution, then, is the replication revolution. Rather than using the metrics of how many papers are we publishing as a metric for success, instead, what we’ll look at as a metric for successful scientific idea is ‘do you have an idea where other people [who are] looking at the same idea tend to find the same thing as you?’ It is not just narrow replication of one paper or one idea. It’s a really broad science. It includes, for instance, reproduction. So if two scientists disagree, that often leads to constructive ways forward in science—deciding, well, there [are] some new ideas that may come out of that disagreement.

That section, which came early in his first talk of the day, hit on themes that would resurface throughout the afternoon: These people are angry about how the pandemic was handled, they’re trying to use that anger to fuel fundamental change in how science is done in the US, and their plan for change has nearly nothing to do with the issues that made them angry in the first place. In view of this, laying everything out for the MAHA crowd actually does make sense. They’re a suddenly powerful political constituency that also wants to see fundamental change in the scientific establishment, and they are completely unbothered by any lack of intellectual coherence.

Some good

The problem Bhattacharya believes he identified in the COVID response has nothing to do with replication problems. Even if better-replicated studies ultimately serve as a more effective guide to scientific truth, it would do little to change the fact that COVID restrictions were policy decisions largely made before relevant studies could even be completed, much less replicated. That’s a serious incoherence that needs to be acknowledged up front.

But that incoherence doesn’t prevent some of Bhattacharya’s ideas on replication and research priorities from being good. If they were all he was trying to accomplish, he could be a net positive.

Although he is a health economist, Bhattacharya correctly recognized something many people outside science don’t: Replication rarely comes from simply repeating the same set of experiments twice. Instead, many forms of replication happen by poking at the same underlying problem from multiple directions—looking in different populations, trying slightly different approaches, and so on. And if two approaches give different answers, it doesn’t mean that either of them is wrong. Instead, the differences could be informative, revealing something fundamental about how the system operates, as Bhattacharya noted.

He is also correct that simply changing the NIH to allow it to fund more replicative work probably won’t make a difference on its own. Instead, the culture of science needs to change so that replication can lead to publications that are valued for prestige, job security, and promotions—something that will only come slowly. He is also interested in attaching similar value to publishing negative results, like failed hypotheses or problems that people can’t address with existing technologies.


The National Institutes of Health campus. Credit: NIH

Bhattacharya also spent some time discussing the fact that NIH grants have become very risk-averse, an issue frequently discussed by scientists themselves. This aversion is largely derived from the NIH’s desire to ensure that every grant will produce some useful results—something the agency values as a way to demonstrate to Congress that its budget is being spent productively. But it leaves little space for exploratory science or experiments that may not work for technical reasons. Bhattacharya hopes to change that by converting some five-year grants to a two-plus-three structure, where the first two years fund exploratory work that must prove successful for the remaining three years to be funded.

I’m skeptical that this would be as useful as Bhattacharya hopes. Researchers who already have reason to believe the “exploratory” portion will work are likely to apply, and others may find ways to frame results from the exploratory phase as a success. Still, it seems worthwhile to try to fund some riskier research.

There was also talk of providing greater support for young researchers, another longstanding issue. Bhattacharya also wants to ensure that the advances driven by NIH-funded research are more accessible to the public and not limited to those who can afford excessively expensive treatments—again, a positive idea. But he did not share a concrete plan for addressing these issues.

All of this is to say that Bhattacharya has some ideas that may be positive for the NIH and science more generally, even if they fall far short of starting a second scientific revolution. But they’re embedded in a perspective that’s intellectually incoherent and seems to demand far more than tinkering around the edges of reproducibility. And the power to implement his ideas comes from two entities—the MAHA movement and the Trump administration—that are already driving changes that go far beyond what Bhattacharya says he wants to achieve. Those changes will certainly harm science.

Why a revolution?

There are many potential problems with deciding that pandemic-era policy decisions necessitate a scientific revolution. The most significant is that the decisions, again, were fundamentally policy decisions, meaning they were value-driven as much as fact-driven. Bhattacharya is clearly aware of that, complaining repeatedly that his concerns were moral in nature. He also claimed that “during the pandemic, what we found was that the engines of science were used for social control” and that “the lockdowns were so far at odds with human liberty.”

He may be upset that, in his view, scientists intrude upon spiritual truth and personal liberty when recommending policy, but that has nothing to do with how science operates. It’s unclear how changing how scientists prioritize reproducibility would prevent policy decisions he doesn’t like. That disconnect means that even when Bhattacharya is aiming at worthwhile scientific goals, he’s doing so accidentally rather than in a way that will produce useful results.

This is all based on a key belief of Bhattacharya and his allies: that they were right about both the science of the pandemic and the ethical implications of pandemic policies. The latter is highly debatable, and many people would disagree with them about how to navigate the trade-offs between preserving human lives and maximizing personal freedoms.

But there are also many indications that these people are wrong about the science. Bhattacharya acknowledged the existence of long COVID but doesn’t seem to have wrestled with what his preferred policy—encouraging rapid infection among low-risk individuals—might have meant for long COVID incidence, especially given that vaccines appear to reduce the risk of developing it.

Matthew Memoli, acting NIH Director prior to Bhattacharya and currently its principal deputy director, shares Bhattacharya’s view that he was right, saying, “I’m not trying to toot my own horn, but if you read the email I sent [about pandemic policy], everything I said actually has come true. It’s shocking how accurate it was.”

Yet he also proudly proclaimed, “I knew I wasn’t getting vaccinated, and my wife wasn’t, kids weren’t. Knowing what I do about RNA viruses, this is never going to work. It’s not a strategy for this kind [of virus].” And yet the benefits of COVID vaccinations for preventing serious illness have been found in study after study—it is, ironically, science that has been reproduced.

A critical aspect of the original scientific revolution was the recognition that people have to deal with facts that are incompatible with their prior beliefs. It’s probably not a great idea to have a second scientific revolution led by people who appear to be struggling with a key feature of the first.

Political or not?

Anger over Biden-era policies makes Bhattacharya and his allies natural partners of the Trump administration and is almost certainly the reason these people were placed in charge of the NIH. But it also puts them in an odd position with reality, since they have to defend policies that clearly damage science. “You hear, ‘Oh well this project’s been cut, this funding’s been cut,’” Bhattacharya said. “Well, there hasn’t been funding cut.”

A few days after Bhattacharya made this statement, Senator Bernie Sanders released data showing that many areas of research have indeed seen funding cuts.


Bhattacharya’s claim that no funding had been cut appears to be at odds with the data. Credit: Office of Bernard Sanders

Bhattacharya also acknowledged that the US suffers from large health disparities between different racial groups. Yet grants funding studies of those disparities were cut during DOGE’s purge of projects it labeled as “DEI.” Bhattacharya was happy to view that funding as being ideologically motivated. But as lawsuits have revealed, nobody at the NIH ever evaluated whether that was the case; Matthew Memoli, one of the other speakers, simply forwarded on the list of grants identified by DOGE with instructions that they be canceled.

Bhattacharya also did his best to portray the NIH staff as being enthused about the changes he’s making, presenting the staff as being liberated from a formerly oppressive leadership. “The staff there, they worked for many decades under a pretty tight regime,” he told the audience. “They were controlled, and now we were trying to empower them to come to us with their ideas.”

But he is well aware of the dissatisfaction expressed by NIH workers in the Bethesda Declaration (he met with them, after all), as well as the fact that one of the leaders of that effort has since filed for whistleblower protection after being placed on administrative leave due to her advocacy.

Bhattacharya effectively denied both that people had suffered real-world consequences in their jobs and funding and that the decision to sideline them was political. Yet he repeatedly implied that he and his allies suffered due to political decisions because… people left him off some email chains.

“No one was interested in my opinion about anything,” he told the audience. “You weren’t on the emails anymore.”

And he implied this sort of “suppression” was widespread. “I’ve seen Matt [Memoli] poke his head up and say that he was against the COVID vaccine mandates—in the old NIH, that was an act of courage,” Bhattacharya said. “I recognized it as an act of courage because you weren’t allowed to contradict the leader for fear that you were going to get suppressed.” As he acknowledged, though, Memoli suffered no consequences for contradicting “the leader.”

Bhattacharya and his allies continue to argue that it’s a serious problem that they suffered no consequences for voicing ideas they believe were politically disfavored; yet they are perfectly comfortable with people suffering real consequences due to politics. Again, it’s not clear how this sort of intellectual incoherence can rally scientists around any cause, much less a revolution.

Does it matter?

Given that politics has left Bhattacharya in charge of the largest scientific funding agency on the planet, it may not matter how the scientific community views his project. And it’s those politics that are likely at the center of Bhattacharya’s decision to give the MAHA Institute an entire afternoon of his time. The institute was founded specifically to advance the aims of his boss, Secretary of Health Robert F. Kennedy Jr., and represents a group that has become an important component of Trump’s coalition. As such, its members are a constituency that can provide critical political support for what Bhattacharya hopes to accomplish.


Vaccine mandates played a big role in motivating the present leadership of the NIH. Credit: JEAN-FRANCOIS FORT

Unfortunately, they’re also very keen on profoundly unscientific ideas, such as the notion that ivermectin might treat cancer or that vaccines aren’t thoroughly tested. The speakers did their best not to say anything that might offend their hosts; in one instance, a speaker spent several minutes gently explaining to a moderator why there’s no plausible reason to think ivermectin would treat cancer. They also made some supportive gestures where possible. Despite the continued flow of misinformation from his boss, Bhattacharya said, “It’s been really great to be part of administration to work for Secretary Kennedy for instance, whose only focus is to make America healthy.”

He also made a point of naming “vaccine injury” as a medical concern he suggested was often ignored by the scientific community, lumping it in with chronic Lyme disease and long COVID. Several of the speakers noted positive aspects of vaccines, such as their ability to prevent cancers or protect against dementia. Oddly, though, none of these mentions included the fact that vaccines are highly effective at blocking or limiting the impact of the pathogens they’re designed to protect against.

When pressed on some of MAHA’s odder ideas, NIH leadership responded with accurate statements on topics such as plausible biological mechanisms and the timing of disease progression. But the mere fact that they had to answer these questions highlights the challenges NIH leadership faces: Their primary political backing comes from people who have limited respect for the scientific process. Pandering to them, though, will ultimately undercut any support they might achieve from the scientific community.

Managing that tension while starting a scientific revolution would be challenging on its own. But as the day’s talks made clear, the challenges are likely to be compounded by the lack of intellectual coherence behind the whole project. As much as it would be good to see the scientific community place greater value on reproducibility, these aren’t the right guys to make that happen.

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

NIH head, still angry about COVID, wants a second scientific revolution

Under Trump, EPA’s enforcement of environmental laws collapses, report finds


The Environmental Protection Agency has drastically pulled back on holding polluters accountable.

Enforcement against polluters in the United States plunged in the first year of President Donald Trump’s second term, a far bigger drop than in the same period of his first term, according to a new report from a watchdog group.

By analyzing a range of federal court and administrative data, the nonprofit Environmental Integrity Project found that civil lawsuits filed by the US Department of Justice in cases referred by the Environmental Protection Agency dropped to just 16 in the first 12 months after Trump’s inauguration on Jan. 20, 2025. That is 76 percent less than in the first year of the Biden administration.

Trump’s first administration filed 86 such cases in its first year, which was in turn a drop from the Obama administration’s 127 four years earlier.

“Our nation’s landmark environmental laws are meaningless when EPA does not enforce the rules,” Jen Duggan, executive director of the Environmental Integrity Project, said in a statement.

The findings echo two recent analyses from the nonprofits Public Employees for Environmental Responsibility and Earthjustice, which both documented dwindling environmental enforcement under Trump.

From day one of Trump’s second term, the administration has pursued an aggressive deregulatory agenda, scaling back regulations and health safeguards across the federal government that protect water, air and other parts of the environment. This push to streamline industry activities has been particularly favorable for fossil fuel companies. Trump declared an “energy emergency” immediately after his inauguration.

At the EPA, Administrator Lee Zeldin launched in March what the administration called the “biggest deregulatory action in U.S. history”: 31 separate efforts to roll back restrictions on air and water pollution; to hand over more authority to states, some of which have a long history of supporting lax enforcement; and to relinquish EPA’s mandate to act on climate change under the Clean Air Act.

The new report suggests the agency is also relaxing enforcement of existing law. Neither the White House nor the EPA responded to a request for comment.

A “compliance first” approach

Part of the decline in lawsuits against polluters could be due to the lack of staff to carry them out, experts say. According to an analysis from E&E News, at least a third of lawyers in the Justice Department’s environment division have left in the past year. Meanwhile, the EPA in 2025 laid off hundreds of employees who monitored pollution that could hurt human health.

Top agency officials are also directing staff to issue fewer violation notices and reduce other enforcement actions. In December, the EPA formalized a new “compliance first” enforcement policy that stresses working with suspected violators to correct problems before launching any formal action that could lead to fines or mandatory correction measures.

“Formal enforcement … is appropriate only when compliance assurance or informal enforcement is inapplicable or insufficient to achieve rapid compliance,” wrote Craig Pritzlaff, who is now a principal deputy assistant EPA administrator, in a Dec. 5 memo to all enforcement officials and regional offices.

Only in rare cases involving an immediate hazard should enforcers use traditional case tools, Pritzlaff said. “Immediate formal enforcement may be required in certain circumstances, such as when there is an emergency that presents significant harm to human health and the environment,” he wrote.

Federal agencies like the EPA, whose staffs are dwarfed by the vast sectors of the economy they oversee, have typically used enforcement actions not only to deal with violators but also to deter other companies from breaking the law. Environmental advocates worry that without environmental cops visibly on the beat, compliance will erode.

Pritzlaff joined the EPA last fall after five years heading up enforcement for the Texas Commission on Environmental Quality, where nonprofit watchdog group Public Citizen noted that he was known as a “reluctant regulator.” Public Citizen and other advocacy groups criticized TCEQ under Pritzlaff’s leadership for its reticence to take decisive action against repeat violators.

One example: An INEOS chemical plant had racked up close to 100 violations over a decade before a 2023 explosion that sent one worker to the hospital, temporarily shut down the Houston Ship Channel and sparked a fire that burned for an hour. Public Citizen said it was told by TCEQ officials that the agency allowed violations to accumulate over the years, arguing it was more efficient to handle multiple issues in a single enforcement action.

“But that proved to be untrue, instead creating a complex backlog of cases that the agency is still struggling to resolve,” Public Citizen wrote last fall after Pritzlaff joined the EPA. “That’s not efficiency, it’s failure.”

Early last year, TCEQ fined INEOS $2.3 million for an extensive list of violations that occurred between 2016 and 2021.

“A slap on the wrist”

The EPA doesn’t always take entities to court when they violate environmental laws. At times, the agency can resolve these issues through less-formal administrative cases, which actually increased during the first eight months of Trump’s second term when compared to the same period in the Biden administration, according to the new report.

However, most of these administrative actions involved violations of requirements for risk management plans under the Clean Air Act or municipalities’ violations of the Safe Drinking Water Act. The Trump administration did not increase administrative cases that involve pollution from industrial operations, Environmental Integrity Project spokesperson Tom Pelton said over email.

Another signal of declining enforcement: Through September of last year, the EPA issued $41 million in penalties—$8 million less than the same period in the first year of the Biden administration, after adjusting for inflation. This suggests “the Trump Administration may be letting more polluters get by with a slap on the wrist when the Administration does take enforcement action,” the report reads.

Combined, the lack of lawsuits, penalties, and other enforcement actions for environmental violations could impact communities across the country, said Erika Kranz, a senior staff attorney in the Environmental and Energy Law Program at Harvard Law School, who was not involved in the report.

“We’ve been seeing the administration deregulate by repealing rules and extending compliance deadlines, and this decline in enforcement action seems like yet another mechanism that the administration is using to de-emphasize environmental and public health protections,” Kranz said. “It all appears to be connected, and if you’re a person in the US who is worried about your health and the health of your neighbors generally, this certainly could have effects.”

The report notes that many court cases last longer than a year, so it will take time to get a clearer sense of how environmental enforcement is changing under the Trump administration. However, the early data compiled by the Environmental Integrity Project and other nonprofits shows a clear and steep shift away from legal actions against polluters.

Historically, administrations have a “lot of leeway on making enforcement decisions,” Kranz said. But this stark of a drop could prompt lawsuits against the Trump administration, she added.

“Given these big changes and trends, you might see groups arguing that this is more than just an exercise of discretion or choosing priorities [and] this is more of an abdication of an agency’s core mission and its statutory duties,” Kranz said. “I think it’s going to be interesting to see if groups make those arguments, and if they do, how courts look at them.”

This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.
