Science


Ocean acidification crosses “planetary boundaries”

A critical measure of the ocean’s health suggests that the world’s marine systems are in greater peril than scientists had previously realized and that parts of the ocean have already reached dangerous tipping points.

A study, published Monday in the journal Global Change Biology, found that ocean acidification—the process in which the world’s oceans absorb excess carbon dioxide from the atmosphere, becoming more acidic—crossed a “planetary boundary” five years ago.

“A lot of people think it’s not so bad,” said Nina Bednaršek, one of the study’s authors and a senior researcher at Oregon State University. “But what we’re showing is that all of the changes that were projected, and even more so, are already happening—in all corners of the world, from the most pristine to the little corner you care about. We have not changed just one bay, we have changed the whole ocean on a global level.”

The new study, also authored by researchers at the UK’s Plymouth Marine Laboratory and the National Oceanic and Atmospheric Administration (NOAA), finds that by 2020 the world’s oceans were already very close to the “danger zone” for ocean acidity, and in some regions had already crossed into it.

Scientists had determined that ocean acidification enters this danger zone, or crosses this planetary boundary, when the amount of calcium carbonate—which allows marine organisms to develop shells—drops more than 20 percent below pre-industrial levels. The new report puts the current decline at about 17 percent.

“Ocean acidification isn’t just an environmental crisis, it’s a ticking time bomb for marine ecosystems and coastal economies,” said Steve Widdicombe, director of science at the Plymouth lab, in a press release. “As our seas increase in acidity, we’re witnessing the loss of critical habitats that countless marine species depend on and this, in turn, has major societal and economic implications.”

Scientists have identified nine planetary boundaries that, once breached, threaten humanity’s ability to live and thrive. One of these is climate change itself, which scientists have said is already beyond humanity’s “safe operating space” because of the continued emissions of heat-trapping gases. Another is ocean acidification, also caused by burning fossil fuels.



IBM now describing its first error-resistant quantum compute system


Company is moving past focus on qubits, shifting to functional compute units.

A rendering of what IBM expects will be needed to house a Starling quantum computer. Credit: IBM

On Tuesday, IBM released its plans for building a system that should push quantum computing into entirely new territory: one that can perform useful calculations while catching and fixing errors, and that would be utterly impossible to model using classical computing methods. The hardware, which will be called Starling, is expected to be able to perform 100 million operations without error on a collection of 200 logical qubits. And the company expects to have it available for use in 2029.

Perhaps just as significant, IBM is also committing to a detailed description of the intermediate steps to Starling. These include a number of processors that will be configured to host a collection of error-corrected qubits, essentially forming a functional compute unit. This marks a major transition for the company, as it involves moving away from talking about collections of individual hardware qubits and focusing instead on units of functional computational hardware. If all goes well, it should be possible to build Starling by chaining a sufficient number of these compute units together.

“We’re updating [our roadmap] now with a series of deliverables that are very precise,” IBM VP Jay Gambetta told Ars, “because we feel that we’ve now answered basically all the science questions associated with error correction and it’s becoming more of a path towards an engineering problem.”

New architectures

Error correction on quantum hardware involves entangling a group of qubits in a way that distributes one or more quantum bit values among them and includes additional qubits that can be used to check the state of the system. It can be helpful to think of these as data and measurement qubits. Performing weak quantum measurements on the measurement qubits produces what’s called “syndrome data,” which can be interpreted to determine whether anything about the data qubits has changed (indicating an error) and how to correct it.
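
For intuition, here is a minimal, purely classical sketch of that bookkeeping using the simplest possible example, a three-bit repetition code. It illustrates the general idea of syndrome-based detection, not IBM’s actual scheme.

```python
# Purely classical toy: one logical bit encoded across three data bits, with two
# parity checks standing in for the measurement qubits. Syndromes flag where an
# error likely occurred without reading out the encoded data directly.
import random

def encode(bit):
    return [bit, bit, bit]                         # redundant copies of one bit

def apply_noise(data, p=0.1):
    return [b ^ (random.random() < p) for b in data]   # independent bit flips

def syndrome(data):
    return (data[0] ^ data[1], data[1] ^ data[2])  # two parity checks

def correct(data):
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(data))
    if flip is not None:
        data[flip] ^= 1                            # fix the implicated bit
    return data

noisy = apply_noise(encode(1))
print("syndrome:", syndrome(noisy), "-> corrected:", correct(noisy))
```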

There are lots of potential ways to arrange different combinations of data and measurement qubits for this to work, each referred to as a code. But, as a general rule, the more hardware qubits committed to the code, the more robust it will be to errors, and the more logical qubits that can be distributed among its hardware qubits.

Some quantum hardware, like that based on trapped ions or neutral atoms, is relatively flexible when it comes to hosting error-correction codes. The hardware qubits can be moved around so that any two can be entangled, so it’s possible to adopt a huge range of configurations, albeit at the cost of the time spent moving atoms around. IBM’s technology is quite different. It relies on qubits made of superconducting electronics laid out on a chip, with entanglement mediated by wiring that runs between qubits. The layout of this wiring is set during the chip’s manufacture, and so the chip’s design commits it to a limited number of potential error-correction codes.

Unfortunately, this wiring can also enable crosstalk between neighboring qubits, causing them to lose their state. To avoid this, existing IBM processors have their qubits wired in what they term a “heavy hex” configuration, named for its hexagonal arrangements of connections among its qubits. This has worked well to keep the error rate of its hardware down, but it also poses a challenge, since IBM has decided to go with an error-correction code that’s incompatible with the heavy hex geometry.

A couple of years back, an IBM team described a compact error-correction code called a low-density parity check (LDPC) code. This requires a square grid of nearest-neighbor connections among its qubits, as well as wiring to connect qubits that are relatively distant on the chip. To get its chips and error-correction scheme in sync, IBM has made two key advances. The first is in its chip packaging, which now uses several layers of wiring sitting above the hardware qubits to enable all of the connections needed for the LDPC code.

We’ll see that first in a processor called Loon that’s on the company’s developmental roadmap. “We’ve already demonstrated these three things: high connectivity, long-range couplers, and couplers that break the plane [of the chip] and connect to other qubits,” Gambetta said. “We have to combine them all as a single demonstration showing that all these parts of packaging can be done, and that’s what I want to achieve with Loon.” Loon will be made public later this year.


On the left, the simple layout of the connections in a current-generation Heron processor. At right, the complicated web of connections that will be present in Loon. Credit: IBM

The second advance IBM has made is to eliminate the crosstalk that the heavy hex geometry was used to minimize, so heavy hex will be going away. “We are releasing this year a bird for near-term experiments that is a square array that has almost zero crosstalk,” Gambetta said, “and that is Nighthawk.” The more densely connected qubits cut the overhead needed to perform calculations by a factor of 15, Gambetta told Ars.

Nighthawk is a 2025 release on a parallel roadmap that you can think of as user-facing. Iterations on its basic design will be released annually through 2028, each enabling more operations without error (going from 5,000 gate operations this year to 15,000 in 2028). Each individual Nighthawk processor will host 120 hardware qubits, but 2026 will see three of them chained together and operating as a unit, providing 360 hardware qubits. That will be followed in 2027 by a machine with nine linked Nighthawk processors, boosting the hardware qubit number over 1,000.
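
As a quick sanity check on those roadmap numbers, the per-chip figure and the chip counts above multiply out as follows (a back-of-the-envelope tally, not an IBM table):

```python
# Tallying the roadmap figures quoted above (our bookkeeping, not an IBM table):
# 120 hardware qubits per Nighthawk chip, with more chips linked each year.
QUBITS_PER_CHIP = 120
chips_linked = {2025: 1, 2026: 3, 2027: 9}

for year, chips in chips_linked.items():
    print(year, "->", chips * QUBITS_PER_CHIP, "hardware qubits")
# 2026 -> 360 and 2027 -> 1,080, the totals mentioned in the text.
```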

Riding the bicycle

The real future of IBM’s hardware, however, will be happening over on the developmental line of processors, where talk about hardware qubit counts will become increasingly irrelevant. In a technical document released today, IBM is describing the specific LDPC code it will be using, termed a bivariate bicycle code due to some cylindrical symmetries in its details that vaguely resemble bicycle wheels. The details of the connections matter less than the overall picture of what it takes to use this error code in practice.

IBM describes two implementations of this form of LDPC code. In the first, 144 hardware qubits are arranged so that they play host to 12 logical qubits and all of the measurement qubits needed to perform error checks. The standard measure of a code’s ability to catch and correct errors is called its distance, and in this case, the distance is 12. As an alternative, they also describe a code that uses 288 hardware qubits to host the same 12 logical qubits but boost the distance to 18, meaning it’s more resistant to errors. IBM will make one of these collections of logical qubits available as a Kookaburra processor in 2026, which will use them to enable stable quantum memory.
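
Using only the numbers quoted above, the trade-off between the two variants can be summarized in a few lines; the error-correcting capacity uses the textbook rule that a distance-d code can correct up to (d - 1)/2 errors, rounded down.

```python
# Comparing the two code variants using only the figures quoted above:
# n = hardware qubits per block, k = logical qubits hosted, d = code distance.
codes = [
    {"name": "smaller variant", "n": 144, "k": 12, "d": 12},
    {"name": "larger variant",  "n": 288, "k": 12, "d": 18},
]

for c in codes:
    overhead = c["n"] / c["k"]       # hardware qubits spent per logical qubit
    corrects = (c["d"] - 1) // 2     # a distance-d code corrects up to (d-1)/2 errors
    print(f"{c['name']}: {overhead:.0f} hardware qubits per logical qubit, "
          f"corrects up to {corrects} simultaneous errors")
```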

The follow-on will bundle these with a handful of additional qubits that can produce quantum states that are needed for some operations. Those, plus hardware needed for the quantum memory, form a single, functional computation unit, built on a single chip, that is capable of performing all the operations needed to implement any quantum algorithm.

That will appear with the Cockatoo chip, which will also enable multiple processing units to be linked on a single bus, allowing the logical qubit count to grow beyond 12. (The company says that one of the dozen logical qubits in each unit will be used to mediate entanglement with other units and so won’t be available for computation.) That will be followed by the first test versions of Starling, which will allow universal computations on a limited number of logical qubits spread across multiple chips.

Separately, IBM is releasing a document that describes a key component of the system that will run on classical computing hardware. Full error correction requires evaluating the syndrome data derived from the state of all the measurement qubits in order to determine the state of the logical qubits and whether any corrections need to be made. As the complexity of the logical qubits grows, the computational burden of evaluating that syndrome data grows with it. If this evaluation can’t be executed in real time, then it becomes impossible to perform error-corrected calculations.

To address this, IBM has developed a message-passing decoder that can perform parallel evaluations of the syndrome data. The system explores more of the solution space through a combination of randomizing the weight given to the memory of past solutions and handing seemingly non-optimal solutions on to new instances for additional evaluation. The key point is that IBM estimates this decoding can be run in real time using FPGAs, ensuring that the system works.
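
The toy sketch below captures only the general flavor of that description: several randomized instances evaluating the same syndrome data in parallel, with the lowest-weight explanation kept. It is a hypothetical illustration, not IBM’s decoder.

```python
# Toy illustration: several randomized decoder instances run in parallel on the
# same syndrome, each with a different weight on its "memory" of past guesses;
# the best low-weight explanation of the syndrome wins.
import random

CHECKS = [(0, 1), (1, 2), (2, 3)]          # toy parity checks over 4 error bits

def syndrome(err):
    return tuple((err[i] + err[j]) % 2 for i, j in CHECKS)

def decoder_instance(target, memory_weight, steps=300):
    guess, best = [0, 0, 0, 0], None
    for _ in range(steps):
        i = random.randrange(len(guess))
        guess[i] ^= 1                                   # propose flipping one bit
        if syndrome(guess) != target and random.random() > memory_weight:
            guess[i] ^= 1                               # usually back out bad moves
        if syndrome(guess) == target and (best is None or sum(guess) < sum(best)):
            best = list(guess)
    return best

target = syndrome([0, 1, 0, 0])                         # pretend error: bit 1 flipped
results = [decoder_instance(target, random.uniform(0.1, 0.9)) for _ in range(8)]
found = [r for r in results if r is not None]
print(min(found, key=sum) if found else "no instance converged")
```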

A quantum architecture

There are a lot more details beyond those, as well. Gambetta described the linkage between each computational unit—IBM is calling it a Universal Bridge—which requires one microwave cable for each code distance of the logical qubits being linked. (In other words, a distance 12 code would need 12 microwave-carrying cables to connect each chip.) He also said that IBM is developing control hardware that can operate inside the refrigeration hardware, based on what they’re calling “cold CMOS,” which is capable of functioning at 4 Kelvin.

The company is also releasing renderings of what it expects Starling to look like: a series of dilution refrigerators, all connected by a single pipe that contains the Universal Bridge. “It’s an architecture now,” Gambetta said. “I have never put details in the roadmap that I didn’t feel we could hit, and now we’re putting a lot more details.”

The striking thing to me about this is that it marks a shift away from a focus on individual qubits, their connectivity, and their error rates. The hardware error rates are now good enough (4 × 10⁻⁴) for this to work, although Gambetta felt that a few more improvements should be expected. And connectivity will now be directed exclusively toward creating a functional computational unit.

That said, there’s still a lot of space beyond Starling on IBM’s roadmap. The 200 logical qubits it promises will be enough to handle some problems, but not enough to perform the complex algorithms needed to do things like break encryption. That will need to wait for something closer to Blue Jay, a 2033 system that IBM expects will have 2,000 logical qubits. And, as of right now, it’s the only thing listed beyond Starling.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



The nine-armed octopus and the oddities of the cephalopod nervous system


A mix of autonomous and top-down control manage the octopus’s limbs.

With their quick-change camouflage and high level of intelligence, it’s not surprising that the public and scientific experts alike are fascinated by octopuses. Their abilities to recognize faces, solve puzzles, and learn behaviors from other octopuses make these animals a captivating study.

To perform these processes and others, like crawling or exploring, octopuses rely on their complex nervous system, one that has become a focus for neuroscientists. With about 500 million neurons—around the same number as dogs—octopuses’ nervous systems are the most complex of any invertebrate. But, unlike those of vertebrates, the octopus’s nervous system is also decentralized, with around 350 million neurons, or roughly two-thirds of the total, located in its eight arms.

“This means each arm is capable of independently processing sensory input, initiating movement, and even executing complex behaviors—without direct instructions from the brain,” explains Galit Pelled, a professor of Mechanical Engineering, Radiology, and Neuroscience at Michigan State University who studies octopus neuroscience. “In essence, the arms have their own ‘mini-brains.’”

A decentralized nervous system is one factor that helps octopuses adapt to changes such as injury or predation. That adaptability was on full display in an Octopus vulgaris, or common octopus, that researchers at the ECOBAR lab at the Institute of Marine Research in Spain observed with nine arms between 2021 and 2022.

By studying outliers like this cephalopod, researchers can gain insight into how the animal’s detailed scaffolding of nerves changes and regrows over time, uncovering more about how octopuses have evolved over millennia in our oceans.

Brains, brains, and more brains

Because each arm of an octopus contains its own bundle of neurons, the limbs can operate semi-independently from the central brain, enabling faster responses since signals don’t always need to travel back and forth between the brain and the arms. In fact, Pelled and her team recently discovered that “neural signals recorded in the octopus arm can predict movement type within 100 milliseconds of stimulation, without central brain involvement.” She notes that “that level of localized autonomy is unprecedented in vertebrate systems.”

Though each limb moves on its own, the movements of the octopus’s body are smooth and conducted with a coordinated elegance that allows the animal to exhibit an exceptionally broad range of behaviors, adapting on the fly to changes in its surroundings.

“That means the octopus can react quickly to its environment, especially when exploring, hunting, or defending itself,” Pelled says. “For example, one arm can grab food while another is feeling around a rock, without needing permission from the brain. This setup also makes the octopus more resilient. If one arm is injured, the others still work just fine. And because so much decision-making happens at the arms, the central brain is freed up to focus on the bigger picture—like navigating or learning new tasks.”

As if each limb weren’t already buzzing with neural activity, things get even more intricate when researchers zoom in further—to the nerves within each individual sucker, a ring of muscular tissue, which octopuses use to sense and taste their surroundings.

“There is a sucker ganglion, or nerve center, located in the stalk of every sucker. For some species of octopuses, that’s over a thousand ganglia,” says Cassady Olson, a graduate student at the University of Chicago who works with Cliff Ragsdale, a leading expert in octopus neuroscience.

Given that each sucker has its own nerve center—connected to the others by a long axial nerve cord running down the limb—and each arm has hundreds of suckers, things get complicated very quickly. Researchers have historically struggled to study this peripheral nervous system, as it’s called, within the octopus’s body.

“The large size of the brain makes it both really exciting to study and really challenging,” says Z. Yan Wang, an assistant professor of biology and psychology at the University of Washington. “Many of the tools available for neuroscience have to be adjusted or customized specifically for octopuses and other cephalopods because of their unique body plans.”

While each limb acts independently, signals are transmitted back to the octopus’s central nervous system. The octopus’ brain sits between its eyes at the front of its mantle, or head, couched between its two optic lobes, large bean-shaped neural organs that help octopuses see the world around them. These optic lobes are just two of the over 30 lobes experts study within the animal’s centralized brain, as each lobe helps the octopus process its environment.

This elaborate neural architecture is critical given the octopus’s dual role in the ecosystem as both predator and prey. Without natural defenses like a hard shell, octopuses have evolved a highly adaptable nervous system that allows them to rapidly process information and adjust as needed, helping their chances of survival.

Some similarities remain

While the octopus’s decentralized nervous system makes it a unique evolutionary example, it does have some structures that are similar or analogous to those in the human nervous system.

“The octopus has a central brain mass located between its eyes, and an axial nerve cord running down each arm (similar to a spinal cord),” says Wang. “The octopus has many sensory systems that we are familiar with, such as vision, touch (somatosensation), chemosensation, and gravity sensing.”

Neuroscientists have homed in on these similarities to understand how these structures may have evolved across the different branches in the tree of life. As the most recent common ancestor for humans and octopuses lived around 750 million years ago, experts believe that many similarities, from similar camera-like eyes to maps of neural activities, evolved separately in a process known as convergent evolution.

While these similarities shed light on evolution’s independent paths, they also offer valuable insights for fields like soft robotics and regenerative medicine.

Occasionally, unique individuals—like an octopus with an unexpected number of limbs—can provide even deeper clues into how this remarkable nervous system functions and adapts.

Nine arms, no problem

In 2021, researchers from the Institute of Marine Research in Spain used an underwater camera to follow a male Octopus vulgaris, or common octopus. On its left side, three arms were intact, while the others were reduced to uneven, stumpy lengths, sharply bitten off at varying points. Although the researchers didn’t witness the injury itself, they observed that the front right arm—known as R1—was regenerating unusually, splitting into two separate limbs and giving the octopus a total of nine arms.

“In this individual, we believe this condition was a result of abnormal regeneration [a genetic mutation] after an encounter with a predator,” explains Sam Soule, one of the researchers and the first author on the corresponding paper recently published in Animals.

The researchers named the octopus Salvador because its bifurcated arm coiled up on itself like the two upturned ends of Salvador Dalí’s moustache. For two years, the team studied the cephalopod’s behavior and found that it used its bifurcated arm less when doing “riskier” movements such as exploring or grabbing food, which would force the animal to stretch its arm out and expose it to further injury.

“One of the conclusions of our research is that the octopus likely retains a long-term memory of the original injury, as it tends to use the bifurcated arms for less risky tasks compared to the others,” elaborates Jorge Hernández Urcera, a lead author of the study. “This idea of lasting memory brought to mind Dalí’s famous painting The Persistence of Memory, which ultimately became the title of the paper we published on monitoring this particular octopus.”

While the octopus was more protective of its extra limb, its nervous system clearly adapted to the new appendage: after some time recovering from its injuries, the animal was observed using its ninth arm to probe its environment.

“That nine-armed octopus is a perfect example of just how adaptable these animals are,” Pelled adds. “Most animals would struggle with an unusual body part, but not the octopus. In this case, the octopus had a bifurcated (split) arm and still used it effectively, just like any other arm. That tells us the nervous system didn’t treat it as a mistake—it figured out how to make it work.”

Kenna Hughes-Castleberry is the science communicator at JILA (a joint physics research institute between the National Institute of Standards and Technology and the University of Colorado Boulder) and a freelance science journalist. Her main writing focuses are quantum physics, quantum technology, deep technology, social media, and the diversity of people in these fields, particularly women and people from minority ethnic and racial groups. Follow her on LinkedIn or visit her website.



Simulations find ghostly whirls of dark matter trailing galaxy arms

“Basically what you do is you set up a bunch of particles that represent things like stars, gas, and dark matter, and you let them evolve for millions of years,” Bernet says. “Human lives are much too short to witness this happening in real time. We need simulations to help us see more than the present, which is like a single snapshot of the Universe.”
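
What Bernet is describing is an N-body simulation. A heavily stripped-down sketch of the core loop, with gravity only and arbitrary units, might look like the following; it is nowhere near the production codes these groups actually run.

```python
# Heavily simplified N-body sketch: particles standing in for stars and dark
# matter attract each other gravitationally and are stepped forward in time.
# Production galaxy simulations use millions of particles, gas physics, and
# far more careful integrators; units here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n, G, dt, softening = 200, 1.0, 0.01, 0.05
pos = rng.normal(size=(n, 3))                     # initial positions
vel = rng.normal(scale=0.1, size=(n, 3))          # initial velocities
mass = np.full(n, 1.0 / n)

def accelerations(pos):
    diff = pos[None, :, :] - pos[:, None, :]      # pairwise separation vectors
    dist2 = (diff ** 2).sum(-1) + softening ** 2  # softened squared distances
    inv_r3 = dist2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                 # no self-attraction
    return G * (diff * inv_r3[..., None] * mass[None, :, None]).sum(axis=1)

for _ in range(1_000):                            # crude leapfrog-style update
    vel += accelerations(pos) * dt
    pos += vel * dt

print("spread of the particle cloud after evolving:", pos.std())
```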

Several other groups already had galaxy simulations they were using for other science, so the team asked one of them to share its data. When they found the dark matter imprint they were looking for, they checked for it in another group’s simulation. They found it again, and then in a third simulation as well.

The dark matter spirals are much less pronounced than their stellar counterparts, but the team noted a distinct imprint on the motions of dark matter particles in the simulations. The dark spiral arms lag behind the stellar arms, forming a sort of unseen shadow.

These findings add a new layer of complexity to our understanding of how galaxies evolve, suggesting that dark matter is more than a passive, invisible scaffolding holding galaxies together. Instead, it appears to react to the gravity from stars in galaxies’ spiral arms in a way that may even influence star formation or galactic rotation over cosmic timescales. It could also explain the recently discovered excess mass along a nearby spiral arm in the Milky Way.

The fact that they saw the same effect in differently structured simulations suggests that these dark matter spirals may be common in galaxies like the Milky Way. But tracking them down in the real Universe may be tricky.

Bernet says scientists could measure dark matter in the Milky Way’s disk. “We can currently measure the density of dark matter close to us with a huge precision,” he says. “If we can extend these measurements to the entire disk with enough precision, spiral patterns should emerge if they exist.”

“I think these results are very important because it changes our expectations for where to search for dark matter signals in galaxies,” Brooks says. “I could imagine that this result might influence our expectation for how dense dark matter is near the solar neighborhood and could influence expectations for lab experiments that are trying to directly detect dark matter.” That’s a goal scientists have been chasing for nearly 100 years.

Ashley writes about space for a contractor for NASA’s Goddard Space Flight Center by day and freelances in her free time. She holds master’s degrees in space studies from the University of North Dakota and science writing from Johns Hopkins University. She writes most of her articles with a baby on her lap.



A Japanese lander crashed on the Moon after losing track of its location


“It’s not impossible, so how do we overcome our hurdles?”

Takeshi Hakamada, founder and CEO of ispace, attends a press conference in Tokyo on June 6, 2025, to announce the outcome of his company’s second lunar landing attempt. Credit: Kazuhiro Nogi/AFP via Getty Images

A robotic lander developed by a Japanese company named ispace plummeted to the Moon’s surface Thursday, destroying a small rover and several experiments intended to demonstrate how future missions could mine and harvest lunar resources.

Ground teams at ispace’s mission control center in Tokyo lost contact with the Resilience lunar lander moments before it was supposed to touch down in a region called Mare Frigoris, or the Sea of Cold, a basaltic plain in the Moon’s northern hemisphere.

A few hours later, ispace officials confirmed what many observers suspected. The mission was lost. It’s the second time ispace has failed to land on the Moon in as many tries.

“We wanted to make Mission 2 a success, but unfortunately we haven’t been able to land,” said Takeshi Hakamada, the company’s founder and CEO.

Ryo Ujiie, ispace’s chief technology officer, said the final data received from the Resilience lander—assuming it was correct—showed it at an altitude of approximately 630 feet (192 meters) and descending too fast for a safe landing. “The deceleration was not enough. That was a fact,” Ujiie told reporters in a press conference. “We failed to land, and we have to analyze the reasons.”

The company said in a press release that a laser rangefinder used to measure the lander’s altitude “experienced delays in obtaining valid measurement values.” The downward-facing laser fires light pulses toward the Moon during descent and clocks the time it takes to receive a reflection. Because the pulse travels at the speed of light, that round-trip delay tells the lander’s guidance system how far it is above the lunar surface. But something went wrong in the altitude measurement system on Thursday.
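
The underlying arithmetic is straightforward time-of-flight ranging: the altitude is the round-trip travel time multiplied by the speed of light and divided by two. A minimal sketch with illustrative numbers, not ispace’s flight software:

```python
# Time-of-flight altimetry: altitude = (round-trip time x speed of light) / 2.
# Illustrative numbers only; the real sensor must also reject spurious returns
# and cope with terrain, tilt, and timing noise.
SPEED_OF_LIGHT = 299_792_458.0            # m/s

def altitude_from_echo(round_trip_seconds: float) -> float:
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A reflection delayed by about 1.28 microseconds corresponds to roughly 192 m,
# the altitude in the last data ispace received from Resilience.
print(f"{altitude_from_echo(1.28e-6):.0f} m")
```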

“As a result, the lander was unable to decelerate sufficiently to reach the required speed for the planned lunar landing,” ispace said. “Based on these circumstances, it is currently assumed that the lander likely performed a hard landing on the lunar surface.”

Controllers sent a command to reboot the lander in hopes of reestablishing communication, but the Resilience spacecraft remained silent.

“Given that there is currently no prospect of a successful lunar landing, our top priority is to swiftly analyze the telemetry data we have obtained thus far and work diligently to identify the cause,” Hakamada said in a statement. “We will strive to restore trust by providing a report of the findings to our shareholders, payload customers, Hakuto-R partners, government officials, and all supporters of ispace.”

Overcoming obstacles

The Hakuto name harkens back to ispace’s origin in 2010 as a contender for the Google Lunar X-Prize, a sweepstakes that offered a $20 million grand prize to the first privately funded team to put a lander on the Moon. Hakamada’s group was called Hakuto, which means “white rabbit” in Japanese. The prize shut down in 2018 without a winner, leading some of the teams to dissolve or find new purpose. Hakamada stayed the course, raised more funding, and rebooted the program under the name Hakuto-R.

It’s a story of resilience, hence the name of ispace’s second lunar lander. The mission made it closer to the Moon than ispace’s first landing attempt in 2023, but Thursday’s failure is a blow to Hakamada’s project.

“As a fact, we tried twice and we haven’t been able to land on the Moon,” Hakamada said through an interpreter. “So we have to say it’s hard to land on the Moon, technically. We know it’s not easy. It’s not something that everyone can do. We know it’s hard, but the important point is it’s not impossible. The US private companies have succeeded in landing, and also JAXA in Japan has succeeded in landing, so it’s not impossible. So how do we overcome our hurdles?”

The Resilience lander and Tenacious rover, seen mounted near the top of the spacecraft, inside a test facility at the Tsukuba Space Center in Tsukuba, Ibaraki Prefecture, on Thursday, Sept. 12, 2024. Credit: Toru Hanai/Bloomberg via Getty Images

In April 2023, ispace’s first lander crashed on the Moon due to a similar altitude measurement problem. The spacecraft thought it was on the surface of the Moon, but was actually firing its engine to hover at an altitude of 3 miles (5 kilometers). The spacecraft ran out of fuel and went into a free fall before impacting the Moon.

Engineers blamed software as the most likely reason for the altitude-measurement problem. During descent, ispace’s lander passed over a 10,000-foot-tall (3,000-meter) cliff, and the spacecraft’s computer interpreted the sudden altitude change as erroneous.

Ujiie, who leads ispace’s technical teams, said the failure mode Thursday was “similar” to that of the first mission two years ago. But at least in ispace’s preliminary data reviews, engineers saw different behavior from the Resilience lander, which flew with a new type of laser rangefinder after ispace’s previous supplier stopped producing the device.

“From Mission 1 to Mission 2, we improved the software,” Ujiie said. “Also, we improved how to approach the landing site… We see different phenomena from Mission 1, so we have to do more analysis to give you any concrete answers.”

Had ispace landed smoothly on Thursday, the Resilience spacecraft would have deployed a small rover developed by ispace’s European subsidiary. The rover was partially funded by the Luxembourg Space Agency with support from the European Space Agency. It carried a shovel to scoop up a small amount of lunar soil and a camera to take a photo of the sample. NASA had a contract with ispace to purchase the lunar soil in a symbolic proof of concept to show how the government might acquire material from commercial mining companies in the future.

The lander also carried a water electrolyzer experiment to demonstrate technologies that could split water molecules into hydrogen and oxygen, critical resources for a future Moon base. Other payloads aboard the Resilience spacecraft included cameras, a food production experiment, a radiation monitor, and a Swedish art project called “MoonHouse.”

The spacecraft chassis used for ispace’s first two landing attempts was about the size of a compact car, with a mass of about 1 metric ton (2,200 pounds) when fully fueled. The company’s third landing attempt is scheduled for 2027 with a larger lander. Next time, ispace will fly to the Moon through a partnership between the company’s US subsidiary and Draper Laboratory, which has a contract with NASA to deliver experiments to the lunar surface.

Track record

The Resilience lander launched in January on top of a SpaceX Falcon 9 rocket, riding to space in tandem with a commercial Moon lander named Blue Ghost from Firefly Aerospace. Firefly’s lander took a more direct journey to the Moon and achieved a soft landing on March 2. Blue Ghost operated on the lunar surface for two weeks and completed all of its objectives.

The trajectory of ispace’s lander was slower, following a lower-energy, more fuel-efficient path to the Moon before entering lunar orbit last month. Once in orbit, the lander made a few more course corrections to line up with its landing site, then commenced its final descent on Thursday.

Thursday’s landing attempt was the seventh time a privately developed Moon lander tried to conduct a controlled touchdown on the lunar surface.

Two Texas-based companies have had the most success. One of them, Houston-based Intuitive Machines, landed its Odysseus spacecraft on the Moon in February 2024, marking the first time a commercial lander reached the lunar surface intact. But the lander tipped over after touchdown, cutting its mission short after achieving some limited objectives. A second Intuitive Machines lander reached the Moon in one piece in March of this year, but it also fell over and didn’t last as long as the company’s first mission.

Firefly’s Blue Ghost operated for two weeks after reaching the lunar surface, accomplishing all of its objectives and becoming the first fully successful privately owned spacecraft to land and operate on the Moon.

Intuitive Machines, Firefly, and a third company—Astrobotic Technology—have launched their lunar missions under contract with a NASA program aimed at fostering a commercial marketplace for transportation to the Moon. Astrobotic’s first lander failed soon after its departure from Earth. The first two missions launched by ispace were almost fully private ventures, with limited participation from the Japanese space agency, Luxembourg, and NASA.

The Earth looms over the Moon’s horizon in this image from lunar orbit captured on May 27, 2025, by ispace’s Resilience lander. Credit: ispace

Commercial travel to the Moon only began in 2019, so there’s not much of a track record to judge the industry’s prospects. When NASA started signing contracts for commercial lunar missions, the agency’s then-science chief, Thomas Zurbuchen, estimated the initial landing attempts would have a 50-50 chance of success. On the whole, NASA’s experience with Intuitive Machines, Firefly, and Astrobotic isn’t too far off from Zurbuchen’s estimate, with one full success and a couple of partial successes.

The commercial track record worsens if you include private missions from ispace and Israel’s Beresheet lander.

But ispace and Hakamada haven’t given up on the dream. The company’s third mission will launch under the umbrella of the same NASA program that contracted with Intuitive Machines, Firefly, and Astrobotic. Hakamada cited the achievements of Firefly and Intuitive Machines as evidence that the commercial model for lunar missions is a valid one.

“The ones that have the landers, there are two companies I mentioned. Also, Blue Origin maybe coming up. Also, ispace is a possibility,” Hakamada said. “So, very few companies. We would like to catch up as soon as possible.”

It’s too early to know how the failure on Thursday might impact ispace’s next mission with Draper and NASA.

“I have to admit that we are behind,” said Jumpei Nozaki, director and chief financial officer at ispace. “But we do not really think we are behind from the leading group yet. It’s too early to decide that. The players in the world that can send landers to the Moon are very few, so we still have some competitive edge.”

“Honestly, there were some times I almost cried, but I need to lead this company, and I need to have a strong will to move forward, so it’s not time for me to cry,” Hakamada said.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Cambridge mapping project solves a medieval murder


“A tale of shakedowns, sex, and vengeance that expose[s] tensions between the church and England’s elite.”

Location of the murder of John Forde, taken from the Medieval Murder Maps. Credit: Medieval Murder Maps. University of Cambridge: Institute of Criminology

In 2019, we told you about a new interactive digital “murder map” of London compiled by University of Cambridge criminologist Manuel Eisner. Drawing on data catalogued in the city coroners’ rolls, the map showed the approximate location of 142 homicide cases in late medieval London. The Medieval Murder Maps project has since expanded to include maps of York and Oxford homicides, as well as podcast episodes focusing on individual cases.

It’s easy to lose oneself down the rabbit hole of medieval murder for hours, filtering the killings by year, choice of weapon, and location. Think of it as a kind of 14th-century version of Clue: It was the noblewoman’s hired assassins armed with daggers in the streets of Cheapside near St. Paul’s Cathedral. And that’s just the juiciest of the various cases described in a new paper published in the journal Criminal Law Forum.

The noblewoman was Ela Fitzpayne, wife of a knight named Sir Robert Fitzpayne, lord of Stogursey. The victim was a priest and her erstwhile lover, John Forde, who was stabbed to death in the streets of Cheapside on May 3, 1337. “We are looking at a murder commissioned by a leading figure of the English aristocracy,” said University of Cambridge criminologist Manuel Eisner, who heads the Medieval Murder Maps project. “It is planned and cold-blooded, with a family member and close associates carrying it out, all of which suggests a revenge motive.”

Members of the mapping project geocoded all the cases after determining approximate locations for the crime scenes. Written in Latin, the coroners’ rolls are records of sudden or suspicious deaths as investigated by a jury of local men, called together by the coroner to establish facts and reach a verdict. Those records contain such relevant information as where the body was found and by whom; the nature of the wounds; the jury’s verdict on cause of death; the weapon used and how much it was worth; the time, location, and witness accounts; whether the perpetrator was arrested, escaped, or sought sanctuary; and any legal measures taken.
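
As a rough illustration of the kind of structured record that can be geocoded from a roll entry, here is a hypothetical sketch for the Forde case; the field names and coordinates are ours, not the Medieval Murder Maps schema.

```python
# Hypothetical sketch of one geocoded case record, using the fields described
# above. Field names and coordinates are illustrative, not the project's schema.
forde_case = {
    "victim": "John Forde",
    "date": "1337-05-03",
    "location": {
        "description": "Cheapside, near Foster Lane, London",
        "lat": 51.514,            # approximate coordinates for geocoding
        "lon": -0.096,
    },
    "wounds": ["throat cut", "stab wounds to the stomach"],
    "weapons": [{"type": "dagger", "length_inches": 12},
                {"type": "long fighting knife"}],
    "jury_verdict": "homicide",
    "named_suspects": ["Hugh Lovell", "Hugh of Colne", "John Strong",
                       "John of Tindale", "Hascup Neville"],
    "legal_outcome": "Hugh of Colne charged and imprisoned in 1342",
}
print(forde_case["location"]["description"])
```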

A brazen killing

The murder of Forde was one of several premeditated revenge killings recorded in the area of Westcheap. Forde was walking on the street when another priest, Hascup Neville, caught up to him, ostensibly for a casual chat, just after Vespers but before sunset. As they approached Foster Lane, Neville’s four co-conspirators attacked: Ela Fitzpayne’s brother, Hugh Lovell; two of her former servants, Hugh of Colne and John Strong; and a man called John of Tindale. One of them cut Forde’s throat with a 12-inch dagger, while two others stabbed him in the stomach with long fighting knives.

At the inquest, the jury identified the assassins, but that didn’t result in justice. “Despite naming the killers and clear knowledge of the instigator, when it comes to pursuing the perpetrators, the jury turn a blind eye,” said Eisner. “A household of the highest nobility, and apparently no one knows where they are to bring them to trial. They claim Ela’s brother has no belongings to confiscate. All implausible. This was typical of the class-based justice of the day.”

Colne, the former servant, was eventually charged and imprisoned for the crime some five years later in 1342, but the other perpetrators essentially got away with it.

Eisner et al. uncovered additional historical records that shed more light on the complicated history and ensuing feud between the Fitzpaynes and Forde. One was an indictment in the Calendar of Patent Rolls of Edward III, detailing how Ela, her husband, Forde, and several other accomplices raided a Benedictine priory in 1321. Among other crimes, the intruders “broke [the prior’s] houses, chests and gates, took away a horse, a colt and a boar… felled his trees, dug in his quarry, and carried away the stone and trees.” The gang also stole 18 oxen, 30 pigs, and about 200 sheep and lambs.

There were also letters that the Archbishop of Canterbury wrote to the Bishop of Winchester. Translations of the letters are published for the first time on the project’s website. The archbishop called out Ela by name for her many sins, including adultery “with knights and others, single and married, and even with clerics and holy orders,” and devised a punishment. This included not wearing any gold, pearls, or precious stones and giving money to the poor and to monasteries, plus a dash of public humiliation. Ela was ordered to perform a “walk of shame”—a tamer version than Cersei’s walk in Game of Thrones—every fall for seven years, carrying a four-pound wax candle to the altar of Salisbury Cathedral.


The London Archives. Inquest number 15 on 1336-7 City of London Coroner’s Rolls. Credit: The London Archives

Ela outright refused to do any of that, instead flaunting “her usual insolence.” Naturally, the archbishop had no choice but to excommunicate her. But Eisner speculates that this may have festered within Ela over the ensuing years, thereby sparking her desire for vengeance on Forde—who may have confessed to his affair with Ela to avoid being prosecuted for the 1321 raid. The archbishop died in 1333, four years before Forde’s murder, so Ela was clearly a formidable person with the patience and discipline to serve her revenge dish cold. Her marriage to Robert (her second husband) endured despite her seemingly constant infidelity, and she inherited his property when he died in 1354.

“Attempts to publicly humiliate Ela Fitzpayne may have been part of a political game, as the church used morality to stamp its authority on the nobility, with John Forde caught between masters,” said Eisner. “Taken together, these records suggest a tale of shakedowns, sex, and vengeance that expose tensions between the church and England’s elites, culminating in a mafia-style assassination of a fallen man of god by a gang of medieval hitmen.”

I, for one, am here for the Netflix true crime documentary on Ela Fitzpayne, “a woman in 14th century England who raided priories, openly defied the Archbishop of Canterbury, and planned the assassination of a priest,” per Eisner.

The role of public spaces

The ultimate objective of the Medieval Murder Maps project is to learn more about how public spaces shaped urban violence historically, the authors said. There were some interesting initial revelations back in 2019. For instance, the murders usually occurred in public streets or squares, and Eisner identified a couple of “hot spots” with higher concentrations than other parts of London. One was that particular stretch of Cheapside running from St Mary-le-Bow church to St. Paul’s Cathedral, where John Forde met his grisly end. The other was a triangular area spanning Gracechurch, Lombard, and Cornhill, radiating out from Leadenhall Market.

The perpetrators were mostly men (in only four cases were women the only suspects). As for weapons, knives and swords of varying types were the ones most frequently used, accounting for 68 percent of all the murders. The greatest risk of violent death in London was on weekends (especially Sundays), between early evening and the first few hours after curfew.

Eisner et al. have now extended their spatial analysis to include homicides committed in York and Oxford in the 14th century, with similar conclusions. Murders most often took place in markets, squares, and thoroughfares—all key nodes of medieval urban life—in the evenings or on weekends. Oxford had significantly higher murder rates than York or London and also more organized group violence, “suggestive of high levels of social disorganization and impunity.” London, meanwhile, showed distinct clusters of homicides, “which reflect differences in economic and social functions,” the authors wrote. “In all three cities, some homicides were committed in spaces of high visibility and symbolic significance.”

Criminal Law Forum, 2025. DOI: 10.1007/s10609-025-09512-7  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Startup puts a logical qubit in a single piece of hardware

A bit over a year ago, Nord Quantique showed that a similar setup could be used to identify the most common form of error in these devices, one in which the system loses one of its photons. “We can store multiple microwave photons into each of these cavities, and the fact that we have redundancy in the system comes exactly from this,” said Nord Quantique’s CTO, Julien Camirand Lemyre. However, that earlier system was unable to handle many of the less common errors that might also occur.

This time around, the company is showing that it can get an actual logical qubit into a variant of the same hardware. In the earlier version of its equipment, the resonator cavity had a single post and supported a single frequency. In the newer iteration, there were two posts and two frequencies. Each of those frequencies creates its own quantum resonator in the same cavity, with its own set of modes. “It’s this ensemble of photons inside this cavity that creates the logical qubit,” Lemyre told Ars.

The additional quantum information that can now be stored in the system enables it to identify more complex errors than the loss of a photon.

Catching, but not fixing errors

The company did two experiments with this new hardware. First, it ran multiple rounds of error detection on data stored in the logical qubit, essentially testing its ability to act like a quantum memory and retain the information stored there. Without correcting errors, the system rapidly decayed, with an error probability in each round of measurement of about 12 percent. By the time the system reached the 25th measurement, almost every instance had already encountered an error.
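
That decay is roughly what simple compounding predicts: with a 12 percent chance of error in each round, very few instances survive 25 rounds untouched. A quick back-of-the-envelope check:

```python
# With a ~12 percent chance of an error in each measurement round, the odds of a
# given instance surviving N rounds untouched shrink geometrically.
p_error_per_round = 0.12

for rounds in (5, 10, 25):
    p_clean = (1 - p_error_per_round) ** rounds
    print(f"{rounds} rounds: {p_clean:.1%} chance of no error yet")
# 25 rounds -> about 4 percent, which is why nearly every instance had already
# encountered an error by that point.
```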

The second time through, the company repeated the process, discarding any instances in which an error occurred. In almost every instance, that meant the results were discarded long before they got through two dozen rounds of measurement. But at these later stages, none of the remaining instances were in an erroneous state. That indicates that a successful correction of the errors—something the team didn’t try—would be able to fix all the detected problems.



What solar? What wind? Texas data centers build their own gas power plants


Data center operators are turning away from the grid to build their own power plants.

Sisters Abigail and Jennifer Lindsey stand on their rural property on May 27 outside New Braunfels, Texas, where they posted a sign in opposition to a large data center and power plant planned across the street. Credit: Dylan Baddour/Inside Climate News

NEW BRAUNFELS, Texas—Abigail Lindsey worries the days of peace and quiet might be nearing an end at the rural, wooded property where she lives with her son. On the old ranch across the street, developers want to build an expansive complex of supercomputers for artificial intelligence, plus a large, private power plant to run it.

The plant would be big enough to power a major city, with 1,200 megawatts of planned generation capacity fueled by West Texas shale gas. It will supply only the new data center, and possibly other large data centers recently proposed down the road.

“It just sucks,” Lindsey said, sitting on her deck in the shade of tall oak trees, outside the city of New Braunfels. “They’ve come in and will completely destroy our way of life: dark skies, quiet and peaceful.”

The project is one of many others like it proposed in Texas, where a frantic race to boot up energy-hungry data centers has led many developers to plan their own gas-fired power plants rather than wait for connection to the state’s public grid. Egged on by supportive government policies, this buildout promises to lock in strong gas demand for a generation to come.

The data center and power plant planned across from Lindsey’s home is a partnership between an AI startup called CloudBurst and the natural gas pipeline giant Energy Transfer. It was Energy Transfer’s first-ever contract to supply gas for a data center, but it is unlikely to be its last. In a press release, the company said it was “in discussions with a number of data center developers and expects this to be the first of many agreements.”

Previously, conventional wisdom assumed that this new generation of digital infrastructure would be powered by emissions-free energy sources like wind, solar and battery power, which have lately seen explosive growth. So far, that vision isn’t panning out, as desires to build quickly overcome concerns about sustainability.

“There is such a shortage of data center capacity and power,” said Kent Draper, chief commercial officer at Australian data center developer IREN, which has projects in West Texas. “Even the large hyperscalers are willing to turn a blind eye to their renewable goals for some period of time in order to get access.”


The Hays Energy Project is a 990 MW gas-fired power plant near San Marcos, Texas. Credit: Dylan Baddour/Inside Climate News

IREN prioritizes renewable energy for its data centers—giant warehouses full of advanced computers and high-powered cooling systems that can be configured to produce crypto currency or generate artificial intelligence. In Texas, that’s only possible because the company began work here years ago, early enough to secure a timely connection to the state’s grid, Draper said.

There were more than 2,000 active generation interconnection requests as of April 30, totaling 411,600 MW of capacity, according to grid operator ERCOT. A bill awaiting signature on Gov. Greg Abbott’s desk, S.B. 6, looks to filter out unserious large-load projects bloating the queue by imposing a $100,000 fee for interconnection studies.

Wind and solar farms require vast acreage and generate energy intermittently, so they work best as part of a diversified electrical grid that collectively provides power day and night. But as the AI gold rush gathered momentum, a surge of new project proposals has created years-long wait times to connect to the grid, prompting many developers to bypass it and build their own power supply.

Operating alone, a wind or solar farm can’t run a data center. Battery technologies still can’t store such large amounts of energy for the length of time required to provide steady, uninterrupted power for 24 hours per day, as data centers require. Small nuclear reactors have been touted as a means to meet data center demand, but the first new units remain a decade from commercial deployment, while the AI boom is here today.

Now, Draper said, gas companies approach IREN all the time, offering to quickly provide additional power generation.

Gas provides almost half of all power generation capacity in Texas, far more than any other source. But the amount of gas power in Texas has remained flat for 20 years, while wind and solar have grown sharply, according to records from the US Energy Information Administration. Facing a tidal wave of proposed AI projects, state lawmakers have taken steps to try to slow the expansion of renewable energy and position gas as the predominant supply for a new era of demand.

This buildout promises strong demand and high gas prices for a generation to come, a boon to Texas’ fossil fuel industry, the largest in the nation. It also means more air pollution and emissions of planet-warming greenhouse gases, even as the world continues to barrel past temperature records.

Texas, with 9 percent of the US population, accounted for about 15 percent of current gas-powered generation capacity in the country but 26 percent of planned future generation at the end of 2024, according to data from Global Energy Monitor. Both the current and planned shares are far more than any other state.

GEM identified 42 new gas turbine projects under construction, in development, or announced in Texas before the start of this year. None of those projects are sited at data centers. However, other projects announced since then, like CloudBurst and Energy Transfer outside New Braunfels, will include dedicated gas power plants on site at data centers.

For gas companies, the boom in artificial intelligence has quickly become an unexpected gold mine. US gas production has risen steadily over 20 years since the fracking boom began, but gas prices have tumbled since 2024, dragged down by surging supply and weak demand.

“The sudden emergence of data center demand further brightens the outlook for the renaissance in gas pricing,” said a 2025 oil and gas outlook report by East Daley Analytics, a Colorado-based energy intelligence firm. “The obvious benefit to producers is increased drilling opportunities.”

It forecast up to a 20 percent increase in US gas production by 2030, driven primarily by a growing gas export sector on the Gulf Coast. Several large export projects will finish construction in the coming years, with demand for up to 12 billion cubic feet of gas per day, the report said, while new power generation for data centers would account for 7 billion cubic feet per day of additional demand. That means profits for power providers, but also higher costs for consumers.

Natural gas, a mixture primarily composed of methane, burns much cleaner than coal but still creates air pollution, including soot, some hazardous chemicals, and greenhouse gases. Unburned methane released into the atmosphere has more than 80 times the near-term warming effect of carbon dioxide, leading some studies to conclude that ubiquitous leaks in gas supply infrastructure make it as impactful as coal to the global climate.


It’s a power source that’s heralded for its ability to get online fast, said Ed Hirs, an energy economics lecturer at the University of Houston. But the years-long wait times for turbines have quickly become the industry’s largest constraint in an otherwise positive outlook.

“If you’re looking at a five-year lead time, that’s not going to help Alexa or Siri today,” Hirs said.

The reliance on gas power for data centers is a departure from previous thought, said Larry Fink, founder of global investment firm BlackRock, speaking to a crowd of industry executives at an oil and gas conference in Houston in March.

About four years ago, if someone said they were building a data center, they said it must be powered by renewables, he recounted. Two years ago, it was a preference.

“Today?” Fink said. “They care about power.”

Gas plants for data centers

Since the start of this year, developers have announced a flurry of gas power deals for data centers. In the small city of Abilene, the builders of Stargate, one of the world’s largest data center projects, applied for permits in January to build 360 MW of gas power generation, authorized to emit 1.6 million tons of greenhouse gases and 14 tons of hazardous air pollutants per year. Later, the company announced the acquisition of an additional 4,500 MW of gas power generation capacity.

Also in January, a startup called Sailfish announced ambitious plans for a 2,600-acre, 5,000 MW cluster of data centers in the tiny North Texas town of Tolar, population 940.

“Traditional grid interconnections simply can’t keep pace with hyperscalers’ power demands, especially as AI accelerates energy requirements,” Sailfish founder Ryan Hughes told the website Data Center Dynamics at the time. “Our on-site natural gas power islands will let customers scale quickly.”

CloudBurst and Energy Transfer announced their data center and power plant outside New Braunfels in February, and another partnership announced plans for a 250 MW gas plant and data center near Odessa in West Texas. In May, a developer called Tract announced a 1,500-acre, 2,000 MW data center campus near the small Central Texas town of Lockhart, with some on-site generation and some purchased gas power.

Not all new data centers need gas plants. A 120 MW South Texas data center project announced in April would use entirely wind power, while an enormous, 5,000 MW megaproject outside Laredo announced in March hopes to eventually run entirely on private wind, solar, and hydrogen power (though it will use gas at first). Another collection of six data centers planned in North Texas hopes to draw 1,400 MW from the grid.

Altogether, Texas’ grid operator predicts statewide power demand will nearly double within five years, driven largely by data centers for artificial intelligence. The trend mirrors a situation unfolding across the country, according to analysis by S&P Global.

“There is huge concern about the carbon footprint of this stuff,” said Dan Stanzione, executive director of the Texas Advanced Computing Center at the University of Texas at Austin. “If we could decarbonize the power grid, then there is no carbon footprint for this.”

However, despite massive recent expansions of renewable power generation, the boom in artificial intelligence appears to be moving the country farther from, not closer to, its decarbonization goals.

Restrictions on renewable energy

Anticipating a buildout of power supply, state lawmakers have proposed or passed new rules to support the deployment of more gas generation and to slow the surging expansion of wind and solar power projects. Supporters of these bills say they aim to capitalize on Texas’ position as the nation’s top gas producer.

Some energy experts say the rules proposed throughout the legislative session could dismantle the state’s leadership in renewables and undermine its ability to provide cheap, reliable power.

“It absolutely would [slow] if not completely stop renewable energy,” said Doug Lewin, a Texas energy consultant, about one of the proposed rules in March. “That would really be extremely harmful to the Texas economy.”

While the bills deemed “industry killers” for renewables missed key deadlines, failing to reach Gov. Greg Abbott’s desk, they illustrate some lawmakers’ aspirations for the state’s energy industry.

One failed bill, S.B. 388, would have required every watt of new solar brought online to be accompanied by a watt of new gas. Another set of twin bills, H.B. 3356 and S.B. 715, would have forced existing wind and solar companies to buy fossil-fuel based power or connect to a battery storage resource to cover the hours the energy plants are not operating.

When the Legislature last met in 2023, it created a $5 billion public “energy fund” to finance new gas plants but not wind or solar farms. It also created a new tax abatement program that excluded wind and solar. This year’s budget added another $5 billion to double the fund.

Bluebonnet Electric Cooperative is currently completing construction on a 190 MW gas-fired peaker plant near the town of Maxwell in Caldwell County. Credit: Dylan Baddour/Inside Climate News

Among the lawmakers leading the effort to scale back the state’s deployment of renewables is state Sen. Lois Kolkhorst, a Republican from Brenham. One bill she co-sponsored, S.B. 819, aimed to create new siting rules for utility-scale renewable projects and would have required them to get permits from the Public Utility Commission that no other energy source—coal, gas or nuclear—needs. “It’s just something that is clearly meant to kneecap an industry,” Lewin said about the bill, which failed to pass.

Kolkhorst said the bill sought to balance the state’s need for power while respecting landowners across the state.

Former state Rep. John Davis, now a board member at Conservative Texans for Energy Innovation, said the session shows how renewables have become a red meat issue.

More than 20 years ago, Davis and Kolkhorst worked together in the Capitol as Texas deregulated its energy market, which encouraged renewables to enter the grid’s mix, he said. Now Davis herds sheep and goats on his family’s West Texas ranch, where seven wind turbines provide roughly 40 percent of their income.

He never could have dreamed how significant renewable energy would become for the state grid, he said. That’s why he’s disappointed with the direction the legislature is headed with renewables.

“I can’t think of anything more conservative, as a conservative, than wind and solar,” Davis said. “These are things God gave us—use them and harness them.”

A report published in April found that targeted limits on solar and wind development in Texas could increase electricity costs for consumers and businesses. The report, prepared by Aurora Energy Research for the Texas Association of Business, said restricting the further deployment of renewables would drive power prices up 14 percent by 2035.

“Texas is at a crossroads in its energy future,” said Olivier Beaufils, a top executive at Aurora Energy Research. “We need policies that support an all-of-the-above approach to meet the expected surge in power demand.”

Likewise, the commercial intelligence firm Wood Mackenzie expects the power demand from data centers to drive up prices of gas and wholesale consumer electricity.

Pollution from gas plants

Even when new power plants aren’t built on data center sites, they may still be developed to meet demand from the server farms.

For example, in 2023, developer Marathon Digital started up a Bitcoin mine in the small town of Granbury on the site of the 1,100 MW Wolf Hollow II gas power plant. It held contracts to purchase 300 MW from the plant.

One year later, the power plant operator sought permits to install eight additional “peaker” gas turbines able to produce up to 352 MW of electricity. These small units, designed to turn on intermittently during hours of peak demand, release more pollution than typical gas turbines.

Those additional units would be approved to release 796,000 tons per year of greenhouse gases, 251 tons per year of nitrogen oxides and 56 tons per year of soot, according to permitting documents. That application is currently facing challenges from neighboring residents in state administrative courts.

About 150 miles away, neighbors are challenging another gas plant permit application in the tiny town of Blue. At 1,200 MW, the $1.2 billion plant proposed by Sandow Lakes Energy Co. would be among the largest in the state and would almost entirely serve private customers, likely including the large data centers that operate about 20 miles away.

Travis Brown and Hugh Brown, no relation, stand by a sign marking the site of a proposed 1,200 MW gas-fired power plant in their town of Blue on May 7. Credit: Dylan Baddour/Inside Climate News

This plan bothers Hugh Brown, who moved out to these green, rolling hills of rural Lee County in 1975, searching for solitude. Now he lives on 153 wooded acres that he’s turned into a sanctuary for wildlife.

“What I’ve had here is a quiet, thoughtful life,” said Brown, skinny with a long grey beard. “I like not hearing what anyone else is doing.”

He worries about the constant roar of giant cooling fans, the bright lights overnight and the air pollution. According to permitting documents, the power plant would be authorized to emit 462 tons per year of ammonia gas, 254 tons per year of nitrogen oxides, 153 tons per year of particulate matter, or soot, and almost 18 tons per year of “hazardous air pollutants,” a collection of chemicals that are known to cause cancer or other serious health impacts.

It would also be authorized to emit 3.9 million tons of greenhouse gases per year, about as much as 72,000 standard passenger vehicles.

“It would be horrendous,” Brown said. “There will be a constant roaring of gigantic fans.”

In a statement, Sandow Lakes Energy denied that the power plant will be loud. “The sound level at the nearest property line will be similar to a quiet library,” the statement said.

Sandow Lakes Energy said the plant will support the local tax base and provide hundreds of temporary construction jobs and dozens of permanent jobs. Sandow also provided several letters signed by area residents who support the plant.

“We recognize the critical need for reliable, efficient, and environmentally responsible energy production to support our region’s growth and economic development,” wrote Nathan Bland, president of the municipal development district in Rockdale, about 20 miles from the project site.

Brown stands next to a pond on his property ringed with cypress trees he planted 30 years ago. Credit: Dylan Baddour/Inside Climate News

Sandow says the plant will be connected to Texas’ public grid, and many supporting letters for the project cited a need for grid reliability. But according to permitting documents, the 1,200 MW plant will supply only 80 MW to the grid and only temporarily, with the rest going to private customers.

“Electricity will continue to be sold to the public until all of the private customers have completed projects slated to accept the power being generated,” said a permit review by the Texas Commission on Environmental Quality.

Sandow has declined to name those customers. However, the plant is part of Sandow’s massive, master-planned mixed-use development in rural Lee and Milam counties, where several energy-hungry tenants are already operating, including Riot Platforms, the largest cryptocurrency mine on the continent. Riot’s seven-building complex in Rockdale is built to use up to 700 MW, and in April, the company announced the acquisition of a neighboring 125 MW cryptocurrency mine previously operated by Rhodium. Another mine, run by Bitmain, also one of the world’s largest Bitcoin companies, has 560 MW of operating capacity, with plans to add 180 MW more in 2026.

In April, residents of Blue gathered at the volunteer fire department building for a public meeting with Texas regulators and Sandow to discuss questions and concerns over the project. Brown, owner of the wildlife sanctuary, spoke into a microphone and noted that the power plant was placed at the far edge of Sandow’s 33,000-acre development, 20 miles from the industrial complex in Rockdale but near many homes in Blue.

“You don’t want to put it up into the middle of your property where you could deal with the negative consequences,” Brown said, speaking to the developers. “So it looks to me like you are wanting to make money, in the process of which you want to strew grief in your path and make us bear the environmental costs of your profit.”

Inside Climate News’ Peter Aldhous contributed to this report.

This story originally appeared on Inside Climate News.

What solar? What wind? Texas data centers build their own gas power plants Read More »

us-science-is-being-wrecked,-and-its-leadership-is-fighting-the-last-war

US science is being wrecked, and its leadership is fighting the last war


Facing an extreme budget, the National Academies hosted an event that ignored it.

WASHINGTON, DC—The general outline of the Trump administration’s proposed 2026 budget was released a few weeks back, and it included massive cuts for most agencies, including every one that funds scientific research. Late last week, those agencies began releasing details of what the cuts would mean for the actual projects and people they support. And the results are as bad as the initial budget had suggested: one-of-a-kind experimental facilities and hardware retired, deep cuts in the number of scientists supported, and entire areas of research halted.

And this comes in an environment where previously funded grants are being terminated, funding is being held up for ideological screening, and universities have been subjected to arbitrary funding freezes. Collectively, things are heading for damage to US science that will take decades to recover from. It’s a radical break from the trajectory science had been on.

That’s the environment in which the US National Academies of Sciences, Engineering, and Medicine found itself yesterday while hosting the State of the Science event in Washington, DC. It was an obvious opportunity for the nation’s leading scientific organization to warn of the consequences of the path the current administration has been traveling. Instead, the event largely ignored the present to worry about a future that may never exist.

The proposed cuts

The top-line budget numbers proposed earlier indicated things would be bad: nearly 40 percent taken off the National Institutes of Health’s budget, the National Science Foundation down by over half. But now, many of the details of what those cuts mean are becoming apparent.

NASA’s budget includes sharp cuts for planetary science, which would be cut in half and then stay flat for the rest of the decade, with the Mars Sample Return mission canceled. All other science budgets, including Earth Science and Astrophysics, take similar hits; one astronomer posted a graphic showing how many present and future missions would be affected. Active missions that have returned unprecedented data, like Juno and New Horizons, would go, as would two Mars orbiters. As described by Science magazine’s news team, “The plans would also kill off nearly every major science mission the agency has not yet begun to build.”

A chart prepared by astronomer Laura Lopez showing just how many astrophysics missions will be cancelled. Credit: Laura Lopez

The National Science Foundation, which funds much of the US’s fundamental research, is also set for brutal cuts. Biology, engineering, and education will all be slashed by over 70 percent; computer science, math and physical science, and social and behavioral science will all see cuts of over 60 percent. International programs will take an 80 percent cut. The funding rate of grant proposals is expected to drop from 26 percent to just 7 percent, meaning the vast majority of proposals submitted to the NSF will be a waste of time. The number of people involved in NSF-funded activities will drop from over 300,000 to just 90,000. Almost every program to broaden participation in science will be eliminated.

As for specifics, they’re equally grim. The fleet of research ships will essentially become someone else’s problem: “The FY 2026 Budget Request will enable partial support of some ships.” We’ve been able to better pin down the nature and location of gravitational wave events as detectors in Japan and Italy joined the original two LIGO detectors; the NSF will reverse that progress by shutting down one of the two LIGO facilities. The NSF’s contributions to detectors at the Large Hadron Collider will be cut by over half, and one of the two very large telescopes it was helping fund will be cancelled (say goodbye to the Thirty Meter Telescope). “Access to the telescopes at Kitt Peak and Cerro Tololo will be phased out,” and the NSF will transfer the facilities to other organizations.

The Department of Health and Human Services has been less detailed about the specific cuts its divisions will see, largely focusing on the overall numbers, which are down considerably. The NIH, which is facing a cut of over 40 percent, will be reorganized, with its 19 institutes pared down to just eight. This will result in some odd pairings, such as the dental and eye institutes ending up in the same place; genomics and biomedical imaging will likewise end up under the same roof. Other groups like the Centers for Disease Control and Prevention and the Food and Drug Administration will also face major cuts.

Issues go well beyond the core science agencies, as well. In the Department of Energy, funding for wind, solar, and renewable grid integration has been zeroed out, essentially ending all programs in this area. Hydrogen and fuel cells face a similar fate. Collectively, these had received over $600 million in 2024’s budget. Other areas of science at the DOE, such as high-energy physics, fusion, and biology, receive relatively minor cuts that are largely in line with the ones faced by administration priorities like fossil and nuclear energy.

Will this happen?

It goes without saying that this would amount to an abandonment of US scientific leadership at a time when most estimates of China’s research spending show it approaching US-like levels of support. Not only would it eliminate many key facilities, instruments, and institutions that have helped make the US a scientific powerhouse, but it would also block the development of newer and additional ones. The harms are so widespread that even topics that the administration claims are priorities would see severe cuts.

And the damage is likely to last for generations, as support is cut at every stage of the educational pipeline that prepares people for STEM careers. That includes careers in high-tech industries, which may be forced to relocate overseas due to a combination of staffing concerns and heightened immigration controls.

That said, we’ve been here before in the first Trump administration, when budgets were proposed with potentially catastrophic implications for US science. But Congress limited the damage and maintained reasonably consistent budgets for most agencies.

Can we expect that to happen again? So far, the signs are not especially promising. The House has largely adopted the Trump administration’s budget priorities, despite the fact that the budget it passed turns its back on decades of supposed concern about deficit spending. While the Senate has yet to take up the budget, it has also been very pliant during the second Trump administration, approving grossly unqualified cabinet picks such as Robert F. Kennedy Jr.

All of which would seem to call for the leadership of US science organizations to press the case for the importance of science funding to the US and highlight the damage that these cuts would cause. But, if yesterday’s National Academies event is anything to judge by, the leadership is not especially interested.

Altered states

As the nation’s premier science organization, and one that performs lots of analyses for the government, the National Academies would seem to be in a position to have its concerns taken seriously by members of Congress. And, given that the present and future of science in the US is being set by policy choices, a meeting entitled the State of the Science would seem like the obvious place to address those concerns.

If so, it was not obvious to Marcia McNutt, the president of the NAS, who gave the presentation. She made some oblique references to current problems, saying, “We are embarking on a radical new experiment in what conditions promote science leadership, with the US being the treatment group, and China as the control,” and acknowledged that “uncertainties over the science budgets for next year, coupled with cancellations of billions of dollars of already hard-won research grants, is causing an exodus of researchers.”

But her primary focus was on the trends in science funding and policy leading up to, but excluding, the second Trump administration. McNutt suggested this framing was needed to look beyond the next four years. However, that ignores the obvious fact that US science will be fundamentally different if the Trump administration can follow through on its plans and policies; the trends of the last two decades will be irrelevant.

She was also remarkably selective about which Trump administration priorities she avoided discussing. After noting that faculty surveys suggest professors spend roughly 40 percent of their time handling regulatory requirements, she twice mentioned that the administration’s anti-regulatory stance could be a net positive here (once calling it “an opportunity to help”). Yet she neglected to note that many of the abandoned regulations represent a retreat from science-driven policy.

McNutt also acknowledged the problem of science losing the bipartisan support it has enjoyed, as trust in scientists among US conservatives has been on a downward trend. But she suggested it was scientists’ responsibility to fix the problem, even though it’s largely the product of one party deciding it can gain partisan advantage by raising doubts about scientific findings in fields like climate change and vaccine safety.

The panel discussion that came after largely followed McNutt’s lead in avoiding any mention of the current threats to science. The lone exception was Heather Wilson, president of the University of Texas at El Paso and a former Republican member of the House of Representatives and secretary of the Air Force during the first Trump administration. Wilson took direct aim at Trump’s cuts to funding for underrepresented groups, arguing, “Talent is evenly distributed, but opportunity is not.” After arguing that “the moral authority of science depends on the pursuit of truth,” she highlighted the cancellation of grants that had been used to study diseases that are more prevalent in some ethnic groups, saying “that’s not woke science—that’s genetics.”

Wilson was clearly the exception, however, as the rest of the panel largely avoided direct mention of either the damage already done to US science funding or the impending catastrophe on the horizon. We’ve asked the National Academies’ leadership a number of questions about how it perceives its role at a time when US science is clearly under threat. As of this article’s publication, however, we have not received a response.

At yesterday’s event, however, only one person showed a clear sense of what they thought that role should be—Wilson again, whose strongest words were directed at the National Academies themselves, which she said should “do what you’ve done since Lincoln was president,” and stand up for the truth.

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

US science is being wrecked, and its leadership is fighting the last war Read More »

science-phds-face-a-challenging-and-uncertain-future

Science PhDs face a challenging and uncertain future


Smaller post-grad classes are likely due to research budget cuts.

Credit: Thomas Barwick/Stone via Getty Images

Since the National Science Foundation first started collecting postgraduation data nearly 70 years ago, the number of PhDs awarded in the United States has consistently risen. Last year, more than 45,000 students earned doctorates in science and engineering, about an eight-fold increase compared to 1958.

But this level of production of science and engineering PhD students is now in question. Facing significant cuts to federal science funding, some universities have reduced or paused their PhD admissions for the upcoming academic year. In response, experts are beginning to wonder about the short and long-term effects those shifts will have on the number of doctorates awarded and the consequent impact on science if PhD production does drop.

Such questions touch on longstanding debates about academic labor. PhD training is a crucial part of nurturing scientific expertise. At the same time, some analysts have worried about an oversupply of PhDs in some fields, while students have suggested that universities are exploiting them as low-cost labor.

Many budding scientists go into graduate school with the goal of staying in academia and ultimately establishing their own labs. For at least 30 years, there has been talk of a mismatch between the number of doctorates and the limited academic job openings. According to an analysis conducted in 2013, only 3,000 faculty positions in science and engineering are added each year—even though more than 35,000 PhDs are produced in these fields annually.

Decades of this asymmetrical dynamic have created a hypercompetitive and high-pressure environment in the academic world, said Siddhartha Roy, an environmental engineer at Rutgers University who co-authored a recent study on tenure-track positions in engineering. “If we look strictly at academic positions, we have a huge oversupply, and it’s not sustainable,” he said.

But while the academic job market remains challenging, experts point out that PhD training also prepares individuals for career paths in industry, government, and other science and technology fields. If fewer doctorates are awarded and funding continues to be cut, some argue, American science will weaken.

“The immediate impact is there’s going to be less science,” said Donna Ginther, a social researcher who studies scientific labor markets at the University of Kansas. In the long run, that could mean scientific innovations, such as new drugs or technological advances, will stall, she said: “We’re leaving that scientific discovery on the table.”

Historically, one of the main goals of training PhD students has been to retain those scientists as future researchers in their respective fields. “Academia has a tendency to want to produce itself, reproduce itself,” said Ginther. “Our training is geared towards creating lots of mini-mes.”

But it is no secret in the academic world that tenure-track faculty positions are scarce, and the road to obtaining tenure is difficult. Although it varies across different STEM fields, the number of doctorates granted each year consistently surpasses the number of tenure-track positions available. A survey of the 2022-2023 academic year, conducted by the Computing Research Association, found that around 11 percent of PhD graduates in computational science (among those for whom employment data was reported) moved on to tenure-track faculty positions.

Roy found a similar figure for engineering: Around one out of every eight individuals who obtain their doctorate—12.5 percent—will eventually land a tenure-track faculty position, a trend that remained stable between 2014 and 2021, the last year for which his team analyzed data. The bottleneck in faculty positions, according to one recent study, leads about 40 percent of postdoctoral researchers to leave academia.

However, in recent years, researchers who advise graduate students have begun to acknowledge careers beyond academia, including positions in industry, nonprofits, government, consulting, science communication, and policy. “We need, as academics, to take a broader perspective on what and how we prepare our students,” said Ginther.

As opposed to faculty positions, some of these labor markets can be more robust and provide plenty of opportunities for those with a doctorate, said Daniel Larremore, a computer scientist at the University of Colorado Boulder who studies academic labor markets, among other topics. Whether there is a mismatch between the number of PhDs and employment opportunities will depend on the subject of study and which fields are growing or shrinking, he added. For example, he pointed out that there is currently a boom in machine learning and artificial intelligence, so there is a lot of demand from industry for computer science graduates. In fact, commitments to industry jobs after graduation seem to be at a 30-year high.

But not all newly minted PhDs immediately find work. According to the latest NSF data, students in biological and biomedical sciences experienced a decline in job offers in the past 20 years, with 68 percent having definite commitments after graduating in 2023, compared to 72 percent in 2003. “The dynamics in the labor market for PhDs depends very much on what subject the PhD is in,” said Larremore.

Still, employment data reflect that doctorate holders enjoy greater opportunities than the general population. In 2024, the unemployment rate for workers with a doctoral degree in the US was 1.2 percent, less than half the national average at the time, according to the Bureau of Labor Statistics. In NSF’s recent survey, 74 percent of graduating science and engineering doctorate recipients had definite commitments for employment or postdoctoral study or training positions, three points higher than in 2003.

“Overproducing for the number of academic jobs available? Absolutely,” said Larremore. “But overproducing for the economy in general? I don’t think so.”

The experts who spoke with Undark described science PhDs as a benefit for society: Ultimately, scientists with PhDs contribute to the economy of a nation, be it through academia or alternative careers. Many are now concerned about the impact that cuts to scientific research may have on that contribution. Already, there are reports of universities scaling back graduate student admissions in light of funding uncertainties, worried that they might not be able to cover students’ education and training costs. Those changes could result in smaller graduating classes in future years.

Smaller classes of PhD students might not be a bad thing for academia, given the limited faculty positions, said Roy. And for most non-academic jobs, Roy said, a master’s degree is more than sufficient. However, people with doctorates do contribute to other sectors like industry, government labs, and entrepreneurship, he added.

In Ginther’s view, fewer scientists with doctoral training could deal a devastating blow to the broader scientific enterprise. “Science is a long game, and the discoveries now take a decade or two to really hit the market, so it’s going to impinge on future economic growth.”

These long-term impacts of reductions in funding might be hard to reverse and could lead to the withering of the scientific endeavor in the United States, Larremore said: “If you have a thriving ecosystem and you suddenly halve the sunlight coming into it, it simply cannot thrive in the way that it was.”

This article was originally published on Undark. Read the original article.

Science PhDs face a challenging and uncertain future Read More »

some-parts-of-trump’s-proposed-budget-for-nasa-are-literally-draconian

Some parts of Trump’s proposed budget for NASA are literally draconian


“That’s exactly the kind of thing that NASA should be concentrating its resources on.”

Artist’s illustration of the DRACO nuclear rocket engine in space. Credit: Lockheed Martin

New details of the Trump administration’s plans for NASA, released Friday, revealed the White House’s desire to end the development of an experimental nuclear thermal rocket engine that could have shown a new way of exploring the Solar System.

Trump’s NASA budget request is rife with spending cuts. Overall, the White House proposes reducing NASA’s budget by about 24 percent, from $24.8 billion this year to $18.8 billion in fiscal year 2026. In previous stories, Ars has covered many of the programs impacted by the proposed cuts, which would cancel the Space Launch System rocket and Orion spacecraft and terminate numerous robotic science missions, including the Mars Sample Return, probes to Venus, and future space telescopes.

Instead, the leftover funding for NASA’s human exploration program would go toward supporting commercial projects to land on the Moon and Mars.

NASA’s initiatives to pioneer next-generation space technologies are also hit hard in the White House’s budget proposal. If the Trump administration gets its way, NASA’s Space Technology Mission Directorate, or STMD, will see its budget cut nearly in half, from $1.1 billion to $568 million.

Trump’s budget request isn’t final. Both Republican-controlled houses of Congress will write their own versions of the NASA budget, which must be reconciled before going to the White House for President Trump’s signature.

“The budget reduces Space Technology by approximately half, including eliminating failing space propulsion projects,” the White House wrote in an initial overview of the NASA budget request released May 2. “The reductions also scale back or eliminate technology projects that are not needed by NASA or are better suited to private sector research and development.”

Breathing fire

Last week, the White House and NASA put a finer point on these “failing space propulsion projects.”

“This budget provides no funding for Nuclear Thermal Propulsion and Nuclear Electric Propulsion projects,” officials wrote in a technical supplement released Friday detailing Trump’s NASA budget proposal. “These efforts are costly investments, would take many years to develop, and have not been identified as the propulsion mode for deep space missions. The nuclear propulsion projects are terminated to achieve cost savings and because there are other nearer-term propulsion alternatives for Mars transit.”

Foremost among these cuts, the White House proposes to end NASA’s participation in the Demonstration Rocket for Agile Cislunar Operations (DRACO) project. NASA said this proposal “reflects the decision by our partner to cancel” the DRACO mission, which would have demonstrated a nuclear thermal rocket engine in space for the first time.

NASA’s partner on the DRACO mission was the Defense Advanced Research Projects Agency, or DARPA, the Pentagon’s research and development arm. A DARPA spokesperson confirmed the agency was closing out the project.

“DARPA has completed the agency’s involvement in the Demonstration Rocket for Agile Cislunar Orbit (DRACO) program and is transitioning its knowledge to our DRACO mission partner, the National Aeronautics and Space Administration (NASA), and to other potential DOD programs,” the spokesperson said in a response to written questions.

A nuclear rocket engine, which was to be part of NASA’s aborted NERVA program, is tested at Jackass Flats, Nevada, in 1967. Credit: Corbis via Getty Images

Less than two years ago, NASA and DARPA announced plans to move forward with the roughly $500 million DRACO project, targeting a launch into Earth orbit aboard a traditional chemical rocket in 2027. “With the help of this new technology, astronauts could journey to and from deep space faster than ever, a major capability to prepare for crewed missions to Mars,” former NASA administrator Bill Nelson said at the time.

The DRACO mission would have consisted of several elements, including a nuclear reactor to rapidly heat up super-cold liquid hydrogen fuel stored in an insulated tank onboard the spacecraft. Temperatures inside the engine would reach nearly 5,000° Fahrenheit, boiling the hydrogen and driving the resulting gas through a nozzle, generating thrust. From the outside, the spacecraft’s design looks a lot like the upper stage of a traditional rocket. However, theoretically, a nuclear thermal rocket engine like DRACO’s would offer twice the efficiency of the highest-performing conventional rocket engines. That translates to significantly less fuel that a mission to Mars would have to carry across the Solar System.
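
That efficiency claim is usually expressed as specific impulse, and the Tsiolkovsky rocket equation shows why doubling it matters so much. The sketch below uses representative values (roughly 450 seconds for hydrogen-oxygen chemical engines, 900 seconds for nuclear thermal, and an assumed 4.5 km/s departure burn); none of these are DRACO or NASA mission figures.

```python
import math

# Propellant fraction needed for a given delta-v, from the Tsiolkovsky rocket
# equation. Isp values and the delta-v are representative assumptions.
G0 = 9.81  # m/s^2, standard gravity

def propellant_fraction(delta_v_ms: float, isp_s: float) -> float:
    """Fraction of initial vehicle mass that must be propellant."""
    return 1.0 - math.exp(-delta_v_ms / (isp_s * G0))

delta_v = 4_500.0  # m/s, assumed Mars departure burn
for label, isp in [("chemical (H2/O2), ~450 s", 450.0), ("nuclear thermal, ~900 s", 900.0)]:
    print(f"{label}: {propellant_fraction(delta_v, isp):.0%} of departure mass is propellant")
# chemical: ~64 percent; nuclear thermal: ~40 percent
```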

Essentially, a nuclear thermal rocket engine combines the high-thrust capability of a chemical engine with some of the fuel efficiency benefits of low-thrust solar-electric engines. With DRACO, engineers sought hard data to verify their understanding of nuclear propulsion and wanted to make sure the nuclear engine’s challenging design actually worked. DRACO would have used high-assay low-enriched uranium to power its nuclear reactor.

Nuclear electric propulsion uses an onboard nuclear reactor to power plasma thrusters that create thrust by accelerating an ionized gas, like xenon, through a magnetic field. Nuclear electric propulsion would provide another leap in engine efficiency beyond the capabilities of a system like DRACO and may ultimately offer the most attractive option for enduring deep space transportation.
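
The tradeoff behind that extra efficiency is thrust. For any electric thruster, jet power ties thrust to exhaust velocity, so a reactor-powered system with very high specific impulse pushes gently for a long time. The numbers in this sketch (a 1 MW reactor, 60 percent efficiency, 5,000 seconds of specific impulse) are illustrative assumptions, not specifications for any proposed system.

```python
# Thrust available from an electric propulsion system: T = 2 * eta * P / (Isp * g0),
# which follows from jet power P_jet = 0.5 * T * v_exhaust. All inputs are assumed.
G0 = 9.81  # m/s^2

def electric_thrust_newtons(power_w: float, isp_s: float, efficiency: float) -> float:
    """Ideal thrust for a given electrical power, specific impulse, and efficiency."""
    return 2.0 * efficiency * power_w / (isp_s * G0)

print(f"{electric_thrust_newtons(1_000_000, 5_000, 0.6):.0f} N of thrust from 1 MW")
# ~24 N: far less than a chemical or nuclear thermal engine, but sustained for months
```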

NASA led the development of DRACO’s nuclear rocket engine, while DARPA was responsible for the overall spacecraft design, operations, and the thorny problem of securing regulatory approval to launch a nuclear reactor into orbit. The reactor on DRACO would have launched in “cold” mode before activating in space, reducing the risk to people on the ground in the event of a launch accident. The Space Force agreed to pay for DRACO’s launch on a United Launch Alliance Vulcan rocket.

DARPA and NASA selected Lockheed Martin as the lead contractor for the DRACO spacecraft in 2023. BWX Technologies, a leader in the US nuclear industry, won the contract to develop the mission’s reactor.

“We received the notice from DARPA that it ended the DRACO program,” a Lockheed Martin spokesperson said. “While we’re disappointed with the decision, it doesn’t change our vision of how nuclear power influences how we will explore and operate in the vastness of space.”

Mired in the lab

More than 60 years have passed since a US-built nuclear reactor launched into orbit. Aviation Week reported in January that one problem facing DRACO engineers involved questions about how to safely test the nuclear thermal engine on the ground while adhering to nuclear safety protocols.

“We’re bringing two things together—space mission assurance and nuclear safety—and there’s a fair amount of complexity,” said Matthew Sambora, a DRACO program manager at DARPA, in an interview with Aviation Week. At the time, DARPA and NASA had already given up on a 2027 launch to concentrate on developing a prototype engine using helium as a propellant before moving on to an operational engine with more energetic liquid hydrogen fuel, Aviation Week reported.

Greg Meholic, an engineer at the Aerospace Corporation, highlighted the shortfall in ground testing capability in a presentation last year. Nuclear thermal propulsion testing “requires that engine exhaust be scrubbed of radiologics before being released,” he wrote. This requirement “could result in substantially large, prohibitively expensive facilities that take years to build and qualify.”

These safety protocols weren’t as stringent when NASA and the Air Force first pursued nuclear propulsion in the 1960s. Now, the first serious 21st-century effort to fly a nuclear rocket engine in space is grinding to a halt.

“Given that our near-term human exploration and science needs do not require nuclear propulsion, current demonstration projects will end,” wrote Janet Petro, NASA’s acting administrator, in a letter accompanying the Trump administration’s budget release last week.

This figure illustrates the major elements of a typical nuclear thermal rocket engine. Credit: NASA/Glenn Research Center

NASA’s 2024 budget allocated $117 million for nuclear propulsion work, an increase from $91 million the previous year. Congress added more funding for NASA’s nuclear propulsion programs over the Biden administration’s proposed budget in recent years, signaling support on Capitol Hill that may save at least some nuclear propulsion initiatives next year.

It’s true that nuclear propulsion isn’t required for any NASA missions currently on the books. Today’s rockets are good at hurling cargo and people off planet Earth, but once a spacecraft arrives in orbit, there are several ways to propel it toward more distant destinations.

NASA’s existing architecture for sending astronauts to the Moon uses the SLS rocket and Orion spacecraft, both of which are proposed for cancellation and look a lot like the vehicles NASA used to fly astronauts to the Moon more than 50 years ago. SpaceX’s reusable Starship, designed with an eye toward settling Mars, uses conventional chemical propulsion, with methane and liquid oxygen propellants that SpaceX one day hopes to generate on the surface of the Red Planet.

So NASA, SpaceX, and other companies don’t need nuclear propulsion to beat China back to the Moon or put the first human footprints on Mars. But there’s a broad consensus that in the long run, nuclear rockets offer a better way of moving around the Solar System.

The military’s motive for funding nuclear thermal propulsion was its potential for becoming a more efficient means of maneuvering around the Earth. Many of the military’s most important spacecraft are limited by fuel, and the Space Force is investigating orbital refueling and novel propulsion methods to extend the lifespan of satellites.

NASA’s nuclear power program is not finished. The Trump administration’s budget proposal calls for continued funding for the agency’s fission surface power program, with the goal of fielding a nuclear reactor that could power a base on the surface of the Moon or Mars. Lockheed and BWXT, the contractors involved in the DRACO mission, are part of the fission surface power program.

There is some funding in the White House’s budget request for tech demos using other methods of in-space propulsion. NASA would continue funding experiments in long-term storage and transfer of cryogenic propellants like liquid methane, liquid hydrogen, and liquid oxygen. These joint projects between NASA and industry could pave the way for orbital refueling and orbiting propellant depots, aligning with the direction of companies like SpaceX, Blue Origin, and United Launch Alliance.

But many scientists and engineers believe nuclear propulsion offers the only realistic path for a sustainable campaign ferrying people between the Earth and Mars. A report commissioned by NASA and the National Academies concluded in 2021 that an aggressive tech-development program could advance nuclear thermal propulsion enough for a human expedition to Mars in 2039. The prospects for nuclear electric propulsion were murkier.

This would have required NASA to substantially increase its budget for nuclear propulsion immediately, likely by an order of magnitude beyond the agency’s baseline funding level, or to an amount exceeding $1 billion per year, said Bobby Braun, co-chair of the National Academies report, in a 2021 interview with Ars. That didn’t happen.

Going nuclear

The interplanetary transportation architectures envisioned by NASA and SpaceX will, at least initially, primarily use chemical propulsion for the cruise between Earth and Mars.

Kurt Polzin, chief engineer of NASA’s space nuclear propulsion projects, said significant technical hurdles stand in the way of any propulsion system selected to power heavy cargo and humans to Mars.

“Anybody who says that they’ve solved the problem, you don’t know that because you don’t have enough data,” Polzin said last week at the Humans to the Moon and Mars Summit in Washington.

“We know that to do a Mars mission with a Starship, you need lots of refuelings at Earth, you need lots of refuelings at Mars, which you have to send in advance,” Polzin said. “You either need to send that propellant in advance or send a bunch of material and hardware to the surface to be set up and robotically make your propellant in situ while you’re there.”

Elon Musk’s SpaceX is betting on chemical propulsion for round-trip flights to Mars with its Starship rocket. This will require assembly of propellant-generation plants on the Martian surface. Credit: SpaceX

Last week, SpaceX founder Elon Musk outlined how the company plans to land its first Starships on Mars. His roadmap includes more than 100 cargo flights to deliver equipment to produce methane and liquid oxygen propellants on the surface of Mars. This is necessary for any Starship to launch off the Red Planet and return to Earth.

“You can start to see that this starts to become a Rube Goldberg way to do Mars,” Polzin said. “Will I say it can’t work? No, but I will say that it’s really, really difficult and challenging. Are there a lot of miracles to make it work? Absolutely. So the notion that SpaceX has solved Mars or is going to do Mars with Starship, I would challenge that on its face. I don’t think the analysis and the data bear that out.”

Engineers know how methane-fueled rocket engines perform in space. Scientists have created liquid oxygen and liquid methane since the late 1800s. Scaling up a propellant plant on Mars to produce thousands of tons of cryogenic liquids is another matter. In the long run, this might be a suitable solution for Musk’s vision of creating a city on Mars, but it comes with immense startup costs and risks. Still, nuclear propulsion is an entirely untested technology as well.

“The thing with nuclear is there are challenges to making it work, too,” Polzin said. “However, all of my challenges get solved here at Earth and in low-Earth orbit before I leave. Nuclear is nice. It has a higher specific impulse, especially when we’re talking about nuclear thermal propulsion. It has high thrust, which means it will get our astronauts there and back quickly, but I can carry all the fuel I need to get back with me, so I don’t need to do any complicated refueling at Mars. I can return without having to make propellant or send any pre-positioned propellant to get back.”

The tug of war over nuclear propulsion is nothing new. The Air Force started a program to develop reactors for nuclear thermal rockets at the height of the Cold War. NASA took over the Air Force’s role a few years later, and the project proceeded into the next phase, called the Nuclear Engine for Rocket Vehicle Application (NERVA). President Richard Nixon ultimately canceled the NERVA project in 1973 after the government had spent $1.4 billion on it, equivalent to about $10 billion in today’s dollars. Despite nearly two decades of work, NERVA never flew in space.

Doing the hard things

The Pentagon and NASA studied several more nuclear thermal and nuclear electric propulsion initiatives before DRACO. Today, there’s a nascent commercial business case for compact nuclear reactors beyond just the government. But there’s scant commercial interest in mounting a full-scale nuclear propulsion demonstration solely with private funding.

Fred Kennedy, co-founder and CEO of a space nuclear power company called Dark Fission, said most venture capital investors lack the appetite to wait for financial returns in nuclear propulsion that they may see in 15 or 20 years.

“It’s a truism: Space is hard,” said Kennedy, a former DARPA program manager. “Nuclear turns out to be hard for reasons we can all understand. So space-nuclear is hard-squared, folks. As a result, you give this to your average associate at a VC firm and they get scared quick. They see the moles all over your face, and they run away screaming.”

But commercial launch costs are coming down. With sustained government investment and streamlined regulations, “this is the best chance we’ve had in a long time” to get a nuclear propulsion system into space, Kennedy said.

Technicians prepare a nozzle for a prototype nuclear thermal rocket engine in 1964. Credit: NASA

“I think, right now, we’re in this transitional period where companies like mine are going to have to rely on some government largesse, as well as hopefully both commercial partnerships and honest private investment,” Kennedy said. “Three years ago, I would have told you I thought I could have done the whole thing with private investment, but three years have turned my hair white.”

Those who share Kennedy’s view thought they were getting an ally in the Trump administration. Jared Isaacman, the billionaire commercial astronaut Trump nominated to become the next NASA administrator, promised to prioritize nuclear propulsion in his tenure as head of the nation’s space agency.

During his Senate confirmation hearing in April, Isaacman said NASA should turn over management of heavy-lift rockets, human-rated spacecraft, and other projects to commercial industry. This change, he said, would allow NASA to focus on the “near-impossible challenges that no company, organization, or agency anywhere in the world would be able to undertake.”

The example Isaacman gave in his confirmation hearing was nuclear propulsion. “That’s something that no company would ever embark upon,” he told lawmakers. “There is no obvious economic return. There are regulatory challenges. That’s exactly the kind of thing that NASA should be concentrating its resources on.”

But the White House suddenly announced on Saturday that it was withdrawing Isaacman’s nomination days before the Senate was expected to confirm him for the NASA post. While there’s no indication that Trump’s withdrawal of Isaacman had anything to do with any specific part of the White House’s funding plan, his removal leaves NASA without an advocate for nuclear propulsion and a number of other projects falling under the White House’s budget ax.

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Some parts of Trump’s proposed budget for NASA are literally draconian Read More »

milky-way-galaxy-might-not-collide-with-andromeda-after-all

Milky Way galaxy might not collide with Andromeda after all

100,000 computer simulations reveal Milky Way’s fate—and it might not be what we thought.

It’s been textbook knowledge for over a century that our Milky Way galaxy is doomed to collide with another large spiral galaxy, Andromeda, in the next 5 billion years and merge into one even bigger galaxy. But a fresh analysis published in the journal Nature Astronomy is casting that longstanding narrative in a more uncertain light. The authors conclude that the likelihood of this collision and merger is closer to the odds of a coin flip, with a roughly 50 percent probability that the two galaxies will avoid such an event during the next 10 billion years.

Both the Milky Way and the Andromeda galaxies (M31) are part of what’s known as the Local Group (LG), which also hosts other smaller galaxies (some not yet discovered) as well as dark matter (per the prevailing standard cosmological model). Both already have remnants of past mergers and interactions with other galaxies, according to the authors.

“Predicting future mergers requires knowledge about the present coordinates, velocities, and masses of the systems partaking in the interaction,” the authors wrote. That involves not just the gravitational force between them but also dynamical friction. It’s the latter that dominates when galaxies are headed toward a merger, since it causes galactic orbits to decay.

This latest analysis is the result of combining data from the Hubble Space Telescope and the European Space Agency’s (ESA) Gaia space telescope to perform 100,000 Monte Carlo computer simulations, taking into account not just the Milky Way and Andromeda but the full LG system. Those simulations yielded a very different prediction: There is approximately a 50/50 chance of the galaxies colliding within the next 10 billion years. There is still a 2 percent chance that they will collide in the next 4 to 5 billion years. “Based on the best available data, the fate of our galaxy is still completely open,” the authors concluded.
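
The Monte Carlo approach itself is straightforward: draw the uncertain inputs from their measurement errors many times and count the outcomes. The toy sketch below illustrates the idea with a single invented variable and an invented merger rule; it is not the paper’s dynamical model, and the numbers are chosen only so the toy lands near a coin-flip answer.

```python
import random

# Toy Monte Carlo: sample Andromeda's poorly constrained transverse velocity and
# count how often it falls below a made-up "eventual merger" threshold.
random.seed(1)

N_TRIALS = 100_000
TANGENTIAL_MEAN = 55.0    # km/s, assumed estimate of transverse velocity
TANGENTIAL_SIGMA = 30.0   # km/s, assumed measurement uncertainty
MERGER_THRESHOLD = 55.0   # km/s, invented rule: low sideways motion => merger

mergers = sum(
    1 for _ in range(N_TRIALS)
    if abs(random.gauss(TANGENTIAL_MEAN, TANGENTIAL_SIGMA)) < MERGER_THRESHOLD
)
print(f"Merger probability in this toy model: {mergers / N_TRIALS:.0%}")  # ~50%
```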

Milky Way galaxy might not collide with Andromeda after all Read More »