Science


NASA tested a new SLS booster that may never fly, and the end of it blew off


NASA didn’t want to say much about one of the tests, and the other one lost its nozzle.

An uncontained plume of exhaust appeared near the nozzle of an SLS solid rocket booster moments before its nozzle was destroyed during a test-firing Thursday. Credit: NASA

NASA’s Space Launch System appears to have a finite shelf life. The Trump administration wants to cancel it after just three launches, while the preliminary text of a bill making its way through Congress would extend it to five flights.

But chances are low the Space Launch System will make it to nine flights, and if it does, it’s questionable that it would reach that point before 2040. The SLS rocket is a core piece of NASA’s plan to return US astronauts to the Moon under the Artemis program, but the White House seeks to cancel the program in favor of cheaper commercial alternatives.

For the second time in less than a week, NASA test-fired new propulsion hardware Thursday that the agency would need to keep SLS alive. Last Friday, a new liquid-fueled RS-25 engine ignited on a test stand at NASA’s Stennis Space Center in Mississippi. The hydrogen-fueled engine is the first of its kind to be manufactured since the end of the Space Shuttle program. This particular RS-25 engine is assigned to power the fifth flight of the SLS rocket, a mission known as Artemis V.

Then, on Thursday of this week, NASA and Northrop Grumman test-fired a new solid rocket booster in Utah. This booster features a new design that NASA would use to power SLS rockets beginning with the ninth mission, or Artemis IX. The motor tested on Thursday isn’t flight-worthy. It’s a test unit that engineers will use to gather data on the rocket’s performance.

While the engine test in Mississippi apparently went according to plan, the ground firing of the new solid rocket booster didn’t go quite as smoothly. Less than two minutes into the burn, the motor’s exhaust nozzle violently shattered into countless shards of debris. You can watch the moment in the YouTube video below.

At the start of the program nearly 15 years ago, NASA and its backers in Congress pitched the SLS rocket as the powerhouse behind a new era of deep space exploration. The Space Launch System, they said, would have the advantage of recycling old space shuttle engines and boosters, fast-tracking the new rocket’s path to the launch pad for less money than the cost of an all-new vehicle.

That didn’t pan out. Each Artemis mission costs $4.2 billion, and that’s with shuttle-era engines and boosters that NASA and its contractors already have in their inventories. NASA’s 16 leftover shuttle main engines are enough for the first four SLS flights. NASA has leftover parts for eight pairs of solid rocket boosters.

It has been 10 years

Recognizing that shuttle-era parts will eventually run out, NASA signed a contract with Aerojet Rocketdyne in 2015 to set the stage for the production of new RS-25 engines. NASA later ordered an initial batch of six RS-25 engines from Aerojet, then added 18 more to the order in 2020, at a price of about $100 million per engine. NASA and its contractor aim to reduce the cost to $70 million per engine, but even that figure is many times the cost of engines of comparable size and power: Blue Origin’s BE-4 and SpaceX’s Raptor.
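Treating all 24 ordered engines at the roughly $100 million unit price quoted above (an assumption; the first six may have been priced differently), the scale of the commitment is easy to estimate:

$$6 + 18 = 24 \ \text{engines}, \qquad 24 \times \$100\,\text{M} \approx \$2.4\,\text{B at the current unit cost}, \qquad 24 \times \$70\,\text{M} \approx \$1.7\,\text{B at the target cost}.$$

Either way, the engines alone run into the billions, consistent with the "billions of dollars of funding" mentioned below.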

Finally, NASA test-fired a new flight-rated RS-25 engine for the first time last week at Stennis Space Center. The agency has often provided a livestream of its engine tests at Stennis, but it didn’t offer the public any live video of this one. And this particular test was a pretty big deal. L3Harris, which acquired Aerojet Rocketdyne in 2023, has finally reactivated the RS-25 production line after a decade and billions of dollars of funding.

In fact, NASA made no public statement about the RS-25 test until Monday, and the agency didn’t mention its assignment to fly on the Artemis V mission. If the Trump administration gets its way, the engine will never fly. Maybe that’s fine, but after so long with so much taxpayer investment, this is a milestone worth publicizing, if not celebrating.

L3Harris issued a press release Tuesday confirming the engine’s planned use on the fifth SLS mission. The engine completed a 500-second acceptance test, throttling up to 111 percent of rated thrust, demonstrating more power than engines that flew on the space shuttle or on the first SLS launch in 2022.

A new RS-25 engine, No. 20001, was installed on its test stand in Mississippi earlier this year. Credit: NASA

“This successful acceptance test shows that we’ve been able to replicate the RS-25’s performance and reliability, while incorporating modern manufacturing techniques and upgraded components such as the main combustion chamber, nozzle, and pogo accumulator assembly,” said Kristin Houston, president of space propulsion and power systems at Aerojet Rocketdyne, L3Harris. “Our propulsion technology is key to ensuring the United States leads in lunar exploration, creates a sustained presence on the Moon and does not cede this strategic frontier to other nations.”

The test-firing last Friday came a few days before the 50th anniversary of the first space shuttle main engine test at Stennis on June 24, 1975. That engine carried the serial number 0001. The new RS-25 engine is designated No. 20001.

Watch out

NASA followed last week’s low-key engine test with the test-firing of a solid-fueled booster at Northrop Grumman’s rocket test site in Promontory, Utah, on Thursday. Held in place on its side, the booster produced 3.9 million pounds of thrust, outclassing the power output of the existing boosters assigned to the first eight SLS missions.

Unlike the RS-25 firing at Stennis, NASA chose to broadcast the booster test. Everything appeared to go well until 1 minute and 40 seconds into the burn, when a fiery plume of super-hot exhaust appeared to burn through part of the booster’s structure just above the nozzle. Moments later, the nozzle disintegrated.

Solid rocket boosters can’t be turned off after ignition, and for better or worse, the motor continued firing until it ran out of propellant about 30 seconds later. The rocket sparked a fire in the hills overlooking the test stand.

This was the first test-firing of the Booster Obsolescence and Life Extension (BOLE) program, which aims to develop a higher-performance solid rocket booster for SLS missions. NASA awarded Northrop Grumman a $3.2 billion contract in 2021 to produce boosters with existing shuttle parts for five SLS missions (Artemis IV-VIII), and design, develop, and test a new booster design for Artemis IX.

The boosters produce more than 75 percent of the thrust required to propel the SLS rocket off the launch pad with NASA’s crewed Orion spacecraft on top. Four RS-25 engines power the core stage, collectively generating more than 2 million pounds of thrust.
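As a rough cross-check of that 75 percent figure, using only the thrust numbers quoted in this article (two boosters at 3.9 million pounds each for the new design, plus a core stage at just over 2 million pounds; actual flight values vary):

$$\frac{2 \times 3.9}{2 \times 3.9 + 2.0} = \frac{7.8}{9.8} \approx 0.80,$$

so the boosters supply roughly 80 percent of liftoff thrust, in line with the "more than 75 percent" figure.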

Northrop Grumman calls the new booster “the largest and most powerful segmented solid rocket motor ever built for human spaceflight.”

One of the most significant changes with the BOLE booster design is that it replaces shuttle-era steel cases with carbon-fiber composite cases. Northrop says the new cases are lighter and stronger. It also replaces the booster’s hydraulic thrust vector control steering system with an electronic system. The propellant packed inside the booster is also different, using a mix that Northrop packs inside its commercial rocket motors instead of the recipe used for the space shuttle.

Northrop Grumman has had a tough time with rocket nozzles in recent years. In 2019, a test motor for the company’s now-canceled Omega rocket lost its nozzle during a test-firing in Utah. Then, last year, a smaller Northrop-made booster flying on United Launch Alliance’s Vulcan rocket lost its nozzle in flight. Vulcan’s guidance system and main engines corrected for the problem, and the rocket still achieved its planned orbit.

Photo of Stephen Clark

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Changing one gene can restore some tissue regeneration to mice

Regeneration is a trick many animals, including lizards, starfish, and octopuses, have mastered. Axolotls, a salamander species originating in Mexico, can regrow pretty much everything, from severed limbs to eyes, parts of the brain, and even the spinal cord. Mammals, though, have mostly lost this ability somewhere along their evolutionary path. Regeneration persists in only a limited number of tissues in just a few mammalian species, such as rabbits and goats.

“We were trying to learn how certain animals lost their regeneration capacity during evolution and then put back the responsible gene or pathway to reactivate the regeneration program,” says Wei Wang, a researcher at the National Institute of Biological Sciences in Beijing. Wang’s team has found one of those inactive regeneration genes, activated it, and brought back a limited regeneration ability to mice that did not have it before.

Of mice and bunnies

Wang and his colleagues designed a comparative study of how the wound-healing process works in regenerating and non-regenerating mammalian species. They chose rabbits as their regenerating mammals and mice as the non-regenerating species. As the reference organ, the team picked the ear pinna. “We wanted a relatively simple structure that was easy to observe and yet composed of many different cell types,” Wang says. The test involved punching holes in the ear pinna of rabbits and mice and tracking the wound-repair process.

The healing process began in the same way in rabbits and mice. Within the first few days after the injury, a blastema—a mass of heterogeneous cells—formed at the wound site. “Both rabbits and mice will heal the wounds after a few days,” Wang explains. “But between the 10th and 15th day, you will see the major difference.” In this timeframe, the earhole in rabbits started to become smaller. There were outgrowths above the blastema—the animals were producing more tissue. In mice, on the other hand, the healing process halted completely, leaving a hole in the ear.



Testing ancient Paleolithic migration with a replica canoe


(Left) GPS tracking and modeling of ocean currents toward the end of the experimental voyage. (Right) The team on the water around the time of the left image. Credit: Kaifu et al., 2025/CC-By-ND

At the 30-hour mark, the captain ordered the entire crew to rest, letting the dugout drift freely for a while, which fortunately brought them closer to Yonaguni Island. At hour 40, the island’s silhouette was visible, and over the next five hours, the crew was able to navigate the strong tidal flow along the coast until they reached their landing site: Nama Beach. The experimental voyage was thus a success, and the team augmented it with numerical simulations demonstrating that the boat could make similar voyages from different departure points across both modern and late-Pleistocene oceans.

Granted, it was not possible to recreate Paleolithic conditions perfectly on a modern ocean. The crew first spotted the island because of its artificial lights, although by that time, they were on track navigationally. They were also accompanied by escort ships to ensure the crew’s safety, supplying fresh water twice during the voyage. But the escort ships did not aid with navigation or the dugout captain’s decision-making, and the authors believe that any effects were likely minimal. The biggest difference was the paddlers’ basic modern knowledge of local geography, which helped them develop a navigation plan—an unavoidable anachronism, although the crew did not rely on compasses, GPS, or watches during the voyage.

“Scientists try to reconstruct the processes of past human migrations, but it is often difficult to examine how challenging they really were,” said Kaifu. “One important message from the whole project was that our Paleolithic ancestors were real challengers. Like us today, they had to undertake strategic challenges to advance. For example, the ancient Polynesian people had no maps, but they could travel almost the entire Pacific. There are a variety of signs on the ocean to know the right direction, such as visible land masses, heavenly bodies, swells and winds. We learned parts of such techniques ourselves along the way.”

DOI: “Traversing the Kuroshio: Paleolithic migration across one of the world’s strongest ocean currents,” Science Advances, 2025. 10.1126/sciadv.adv5508  (About DOIs).

DOI: “Palaeolithic seafaring in East Asia: an experimental test of the dugout canoe hypothesis,” Science Advances, 2025. 10.1126/sciadv.adv5507  (About DOIs).



Researchers develop a battery cathode material that does it all

Battery electrode materials need to do a lot of things well. They need to be conductors to get charges to and from the ions that shuttle between the electrodes. They also need to have an open structure that allows the ions to move around before they reach a site where they can be stored. The storage of lots of ions also causes materials to expand, creating mechanical stresses that can cause the structure of the electrode material to gradually decay.

Because it’s hard to get all of these properties from a single material, many electrodes are composite materials, with one chemical used to allow ions into and out of the electrode, another to store them, and possibly a third that provides high conductivity. Unfortunately, this can create new problems, with breakdowns at the interfaces between materials slowly degrading the battery’s capacity.

Now, a team of researchers is proposing a material that seemingly does it all. It’s reasonably conductive, it allows lithium ions to move around and find storage sites, and it’s made of cheap and common elements. Perhaps best of all, it undergoes self-healing, smoothing out damage across charge/discharge cycles.

High capacity

The research team, primarily based in China, set out to limit the complexity of cathodes. “Conventional composite cathode designs, which typically incorporate a cathode active material, catholyte, and electronic conducting additive, are often limited by the substantial volume fraction of electrochemically inactive components,” the researchers wrote. The solution, they reasoned, was an all-in-one material that eliminates the need for most of these extra components.

A number of papers had reported good luck with chlorine-based chemicals, which allowed ions to move readily through the material but didn’t conduct electricity very well. So the researchers experimented with pre-loading one of these materials with lithium. And they focused on iron chloride since it’s a very cheap material.



Today! Ars Live: What’s up with the sudden surge in temperatures?

At 1pm on Thursday, we encourage you to join us for a live chat with Zeke Hausfather, a climate scientist and researcher at Berkeley Earth. We’ll talk a bit about how he got into climate science and ended up at Berkeley Earth and the role that organization plays in the world of climate science. It was launched by a physicist who was somewhat skeptical of the work being done by climate scientists, but it has evolved into one of the key groups that does the math needed to track the planet’s temperatures.

For the past couple of years, those temperatures have risen to remarkable record highs, at one point stringing together a full year in which every month was the warmest instance of that month on record. The rise leaves us at risk of exceeding key climate targets much earlier than expected and has left the climate science community scrambling to explain the intensity of the heat. So we plan to ask Zeke what scientists are thinking about the dramatic nature of these changes, and about attempts to explore the relationship between temperatures and things like tipping points and individual weather events.

And all that leads to the key question: What does this tell us about where our climate is likely to go over the rest of this century?

After that, we’d like to turn things over to your questions. Is there anything you’ve always wanted to know about climate science but didn’t know who to ask? Zeke may be your guy—and if not, then he almost certainly knows who is. So please join us for this discussion, happening Thursday, June 26, at 1 pm US Eastern Time.




During a town hall Wednesday, NASA officials on stage looked like hostages


A Trump appointee suggests NASA may not have a new administrator until next year.

NASA press secretary Bethany Stevens, acting administrator Janet Petro, chief of staff Brian Hughes, associate administrator Vanessa Wyche, and deputy associate administrator Casey Swails held a town hall with NASA employees Wednesday. Credit: NASA

The four people at the helm of America’s space agency held a town hall meeting with employees Wednesday, fielding questions about downsizing, layoffs, and proposed budget cuts that threaten to undermine NASA’s mission and prestige.

Janet Petro, NASA’s acting administrator, addressed questions from an auditorium at NASA Headquarters in Washington, DC. She was joined by Brian Hughes, the agency’s chief of staff, a political appointee who was formerly a Florida-based consultant active in city politics and in Donald Trump’s 2024 presidential campaign. Two other senior career managers, Vanessa Wyche and Casey Swails, were also on the stage.

They tried to put a positive spin on the situation at NASA. Petro, Wyche, and Swails are civil servants, not Trump loyalists. None of them looked like they wanted to be there. The town hall was not publicized outside of NASA ahead of time, but live video of the event was available—unadvertised—on an obscure NASA streaming website. The video has since been removed.

8 percent down

NASA’s employees are feeling the pain after the White House proposed slashing the agency’s topline budget by nearly 25 percent in fiscal year 2026, which begins October 1, from $24.8 billion to $18.8 billion. Adjusted for inflation, this would be the smallest NASA budget since 1961, when the first American launched into space.
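The "nearly 25 percent" characterization follows directly from those two numbers:

$$\frac{24.8 - 18.8}{24.8} \approx 0.24 \quad (\text{about a 24 percent cut}).$$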

“The NASA brand is really strong still, and we have a lot of exciting missions ahead of us,” Petro said. “So, I know it’s a hard time that we’re going to be navigating, but again, you have my commitment that I’m here and I will share all of the information that I have when I get it.”

It’s true that NASA employees, along with industry officials and scientists who regularly work with the agency, are navigating through what would most generously be described as a period of great uncertainty. The perception among NASA’s workforce is far darker. “NASA is f—ed,” one current leader in the agency told Ars a few weeks ago, soon after President Trump rescinded his nomination of billionaire businessman and commercial astronaut Jared Isaacman to be the agency’s next administrator.

Janet Petro, NASA’s acting administrator, is seen in 2020 at Kennedy Space Center in Florida. Credit: NASA/Kim Shiflett

Before the White House released its detailed budget proposal in May, NASA and other federal agencies were already scrambling to respond to the Trump administration’s directives to shrink the size of the government. While NASA escaped the mass layoffs of probationary employees that affected other departments, the space agency offered buyouts and incentives for civil servants to retire early or voluntarily leave their posts.

About 900 NASA employees signed up for the first round of the government’s “deferred resignation” program. Casey Swails, NASA’s deputy associate administrator, said Wednesday that number is now up to 1,500 after NASA announced another chance for employees to take the government’s deferred resignation offer. This represents about 8 percent of NASA’s workforce, and the window for employees to apply runs until July 25.

One takeaway from Wednesday’s town hall is that at least some NASA leaders want to motivate more employees to resign voluntarily. Hughes said a “major reason” for luring workers to leave the agency is to avoid “being in a spot where we have to do the involuntary options.”

Rumors of these more significant layoffs, or reductions in force, have hung over NASA for several months. If that happens, workers may not get the incentives the government is offering today to those who leave the agency on their own. Swails said NASA isn’t currently planning any such layoff, although she left the door open for the situation to change: “We’re doing everything we can to avoid going down that path.”

Ultimately, it will depend on how many employees NASA can get to resign on their own. If it’s not enough, layoffs may still be an option.

Many questions, few answers

Nearly all of the questions employees addressed to NASA leadership Wednesday were submitted anonymously, and in writing: When might Trump nominate someone for NASA administrator to take Isaacman’s place? Will any of NASA’s 10 field centers be closed? What is NASA going to do about Trump’s budget proposal, particularly its impact on science missions?

Their responses to these questions, in order: Probably not any time soon, maybe, and nothing.

The Trump administration selected Petro, an engineer and former Army helicopter pilot, to become acting head of NASA on Inauguration Day in January. Bill Nelson, who served as a Florida senator until 2019, resigned the NASA administrator job when former President Biden left the White House.

Petro had been director of NASA’s Kennedy Space Center since 2021, and before that, she was deputy director of the Florida spaceport for 14 years. She leapfrogged NASA’s top civil servant, associate administrator Jim Free, to become acting administrator in January. Free retired from the agency in February. Before the presidential election last year, Free advocated for the next administration to stay the course with NASA’s Artemis program.

But that’s not what the Trump administration wants to do. The White House seeks to cancel the Space Launch System rocket and Orion spacecraft, both core elements of the Artemis program to return astronauts to the Moon after two more flights. Under the new plan, NASA would procure commercial transportation to ferry crews to the Moon and Mars in a similar way to how the agency buys rides for its astronauts to the International Space Station in low-Earth orbit.

NASA’s Curiosity rover captured images to create this selfie mosaic on the surface of Mars in 2015. If implemented as written, the Trump budget proposal would mark the first time in 30 years that NASA does not have a Mars lander in development. The agency would instead turn to commercial companies to demonstrate they can deliver payloads, and eventually humans, to the red planet.

The Trump administration’s statements on space policy have emphasized the longer-term goal of human missions to Mars. The White House’s plans for what NASA will do at the Moon after the Artemis program’s first landing are still undefined.

Petro has kept a low profile since becoming NASA’s temporary chief executive five months ago. If Trump had moved forward with Isaacman’s nomination, Isaacman would likely be NASA administrator today. The Senate was a few days away from confirming Isaacman when Trump pulled his nomination, apparently for political reasons. The White House withdrew the nomination the day after Elon Musk, who backed Isaacman to take the top job at NASA, left the Trump administration.

Who’s running NASA?

Now, Petro could serve out the year as NASA’s acting administrator. Petro is well-regarded at Kennedy Space Center, where she was a fixture in the center’s headquarters building for nearly 20 years. But she lacks a political constituency in the Trump administration and isn’t empowered to make major policy decisions. The budget cuts proposed for NASA came from the White House’s Office of Management and Budget, not from within the agency itself.

President Trump holds the reins in the process of selecting the next NASA administrator. Trump named Isaacman for the office in December, more than a month before his inauguration, the earliest any incoming president has nominated a NASA administrator. Musk had close ties to Trump then, and a human mission to Mars got a mention in Trump’s inauguration speech.

But space issues seem to have fallen far down Trump’s list of priorities. Hughes, who got his job at NASA in part due to his political connections, suggested it might be a while before Trump gets around to selecting another NASA administrator nominee.

“I think the best guess would tell you that it’s hard to imagine it happening before the next six months, and could perhaps go longer than that into the eight- or nine-month range, but that’s purely speculation,” Hughes said, foreseeing impediments such as the large number of other pending nominations for posts across the federal government and high-priority negotiations with Congress over the federal budget.

Congress is also expected to go on recess in August, so the earliest a NASA nominee might get a confirmation hearing is this fall. Then, the Senate must vote to confirm the nominee before they can take office.

The timeline of Isaacman’s nomination for NASA administrator is instructive. Trump nominated Isaacman in December, and his confirmation hearing was in April. He was on the cusp of a confirmation vote in early June when Trump withdrew his nomination on May 31.

As NASA awaits a leader with political backing, Petro said the agency is undergoing an overhaul to make it “leaner and more agile.” This is likely to result in office closures, and Hughes indicated NASA might end up shuttering entire field centers.

“To the specific question, will they be closed or consolidated? I don’t think we’re there yet to answer that question, but it is actively a part of the conversation we’re having as we go step-by-step through this,” Hughes said.

What can $4 billion buy you?

While Trump’s budget proposal includes robust funding for human space exploration, it’s a different story for most of the rest of NASA. The agency’s science budget would be cut in half to approximately $3.9 billion. NASA’s technology development division would also be reduced by 50 percent.

If the White House gets its way, NASA would scale back research on the International Space Station and cancel numerous robotic missions in development or already in space. The agency would terminate missions currently exploring Jupiter, on the way to study an asteroid, and approaching interstellar space. It would shut down the largest X-ray space telescope ever built and the only one in its class likely to be operating for the next 10 years.

“There’s a lot of science that can still be done with $4 billion,” Petro said. “How we do science, and how we do partnerships, may change in the future to sort of multiply what we’re doing.”

These partnerships might include asking academic institutions or wealthy benefactors to pitch in money to fund science projects at NASA. The agency might also invite commercial companies to play bigger roles in NASA robotic missions, which are typically owned by the government.

This view of Jupiter’s turbulent atmosphere from NASA’s Juno spacecraft includes several of the planet’s southern jet streams. Juno is one of the missions currently in space that NASA would shut down under Trump’s budget request. Credit: NASA

One employee asked what NASA could do to secure more funding in the president’s budget request. But that ship has sailed. The options now available to NASA’s leadership are to support the budget proposal, stay silent, or leave. NASA is an executive agency and part of the Trump administration, and the White House’s budget request is NASA’s, too.

“It’s not our job to advocate, but let’s try to look at this in a positive way,” Petro said. “We’ve still got a lot of money. Let’s see how much mission we can do.”

Ultimately, it’s up to Congress to appropriate funding for NASA and other parts of the government. Lawmakers haven’t signaled where they might land on NASA’s budget, but Sen. Ted Cruz (R-Texas), who is influential on space-related matters, released the text of a proposed bill a few weeks ago that would restore funding for the International Space Station and forgo cancellation of the Space Launch System rocket, among other things. But Cruz did not have much to say about adding more money for NASA’s science programs.

NASA’s senior leaders acknowledged on Wednesday that the pain of the agency’s downsizing will extend far beyond its walls.

“Eighty-five percent of our budget goes out the door to contractors,” Petro said. “So, with a reduced budget, absolutely, our contractors will also be impacted. In fact, they’re probably the bigger driver that will be impacted.”

It’s clearly a turbulent time for America’s space agency, and NASA employees have another month to decide if they want to be part of it.

“I know there’s a lot to consider,” Swails said. “There’s a lot that people are thinking about. I would encourage you to talk it out. Tap into your support systems. Talk to your spouse, your partner, your friend, your financial advisor, whomever you consider those trusted advisors for you.”

This sounds like hollow advice, but it seems like it’s all NASA’s workers can do. The Trump administration isn’t waiting for Congress to finalize the budget for 2026. The downsizing is here.

Photo of Stephen Clark

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



The axion may help clean up the messy business of dark matter


We haven’t found evidence of the theoretical particle, but it’s still worth investigating.

In recent years, a curious hypothetical particle called the axion, invented to address challenging problems with the strong nuclear force, has emerged as a leading candidate to explain dark matter. Although the potential for axions to explain dark matter has been around for decades, cosmologists have only recently begun to seriously search for them. Not only might they be able to resolve some issues with older hypotheses about dark matter, but they also offer a dizzying array of promising avenues for finding them.

But before digging into what the axion could be and why it’s so useful, we have to explore why the vast majority of physicists, astronomers, and cosmologists accept the evidence that dark matter exists and that it’s some new kind of particle. While it’s easy to dismiss the dark matter hypothesis as some sort of modern-day epicycle, the reality is much more complex (to be fair to epicycles, it was an excellent idea that fit the data extremely well for many centuries).

The short version is that nothing in the Universe adds up.

We have many methods available to measure the mass of large objects like galaxies and clusters. We also have various methods to assess the effects of matter in the Universe, like the details of the cosmic microwave background or the evolution of the cosmic web. There are two broad categories: methods that rely solely on estimating the amount of light-emitting matter and methods that estimate the total amount of matter, whether it’s visible or not.

For example, if you take a picture of a generic galaxy, you’ll see that most of the light-emitting matter is concentrated in the core. But when you measure the rotation rate of the galaxy and use that to estimate the total amount of matter, you get a much larger number, plus some hints that it doesn’t perfectly overlap with the light-emitting stuff. The same thing happens for clusters of galaxies—the dynamics of galaxies within a cluster suggest the presence of much more matter than what we can see, and the two types of matter don’t always align. When we use gravitational lensing to measure a cluster’s contents, we again see evidence for much more matter than is plainly visible.
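The rotation-rate argument can be made concrete with a simple Newtonian estimate: for material on a roughly circular orbit of radius $r$ at speed $v$, the mass enclosed within that orbit is about $M(<r) \approx v^2 r / G$. Plugging in illustrative (not measured) values for a large spiral galaxy:

$$M(<r) \approx \frac{v^2 r}{G} \approx \frac{(220\ \mathrm{km/s})^2 \times (30\ \mathrm{kpc})}{G} \approx 3 \times 10^{11}\ M_\odot,$$

several times what you would get by adding up the galaxy's visible stars and gas, which is the mismatch described above.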

The tiny variations in the cosmic microwave background tell us about the influence of both matter that interacts with light and matter that doesn’t. It clearly shows that some invisible component dominated the early Universe. When we look at the large-scale structure, invisible matter rules the day. Matter that doesn’t interact with light can form structures much more quickly than matter that gets tangled up by interacting with itself. Without invisible matter, galaxies like the Milky Way can’t form quickly enough to match observations of the early Universe.

The calculations of Big Bang nucleosynthesis, which correctly predict the abundances of hydrogen and helium in the Universe, put strict constraints on how much light-emitting matter there can be, and that number simply isn’t large enough to accommodate all these disparate results.

Across cosmic scales in time and space, the evidence just piles up: There’s more stuff out there than meets the eye, and it can’t simply be dim-but-otherwise-regular matter.

Weakness of WIMPs

Since pioneering astronomer Vera Rubin first revealed dark matter in a big way in the 1970s, the astronomical community has tried every idea it could think of to explain these observations. One tantalizing possibility is that dark matter is entirely the wrong approach; instead, we’re misunderstanding gravity itself. But so far, half a century later, all attempts to modify gravity ultimately fail one observational test or another. In fact, the most popular modified gravity theory, known as MOND, still requires the existence of dark matter, just less of it.

As the evidence piled up for dark matter in the 1980s and ’90s, astronomers began to favor a particular explanation known as WIMPs, for weakly interacting massive particles. WIMPs weren’t just made up on the spot. They were motivated by particle physics and our attempts to create theories beyond the Standard Model. Many extensions to the Standard Model predicted the existence of WIMP-like particles that could be made in abundance in the early Universe, generating a population of heavy-ish particles that remained largely in the cosmic background.

WIMPs seemed like a good idea, as they could both explain the dark matter problem and bring us to a new understanding of fundamental physics. The idea is that we are swimming in an invisible sea of dark matter particles that almost always simply pass through us undetected. But every once in a while, a WIMP should interact via the weak nuclear force (hence the origin of its name) and give off a shower of byproducts. One problem: We needed to detect one of these rare interactions. So experiments sprang up around the world to catch an elusive dark matter candidate.

With amazing names like CRESST, SNOLAB, and XENON, these experiments have spent years searching for a WIMP to no avail. They’re not an outright failure, though; instead, with every passing year, we know more and more about what the WIMP can’t be—what mass ranges and interaction strengths are now excluded.

By now, that list of what the WIMP can’t be is rather long, and large regions within the space of possibilities are now hard-and-fast ruled out.

OK, that’s fine. I mean, it’s a huge bummer that our first best guess didn’t pan out, but nature is under no obligation to make this easy for us. Maybe the dark matter isn’t a WIMP at all.

More entities are sitting around the particle physics attic that we might be able to use to explain this deep cosmic mystery. And one of those hypothetical particles is called the axion.

Cleaning up with axions

It was the late 1970s, and physicist Frank Wilczek was shopping for laundry detergent. He found one brand standing out among the bottles: Axion. He thought that would make an excellent name for a particle.

He was right.

For decades, physicists had been troubled by a little detail of the theory used to explain the strong nuclear force, known as quantum chromodynamics. By all measurements, that force obeys charge-parity symmetry, which means if you take an interaction, flip all the charges around, and run it in a mirror, you’ll get the same result. But quantum chromodynamics doesn’t enforce that symmetry on its own.

It seemed to be a rather fine-tuned state of affairs, with the strong force unnaturally maintaining a symmetry when there was nothing in the theory to explain why.

In 1977, Roberto Peccei and Helen Quinn discovered an elegant solution: introducing a new field into the Universe could naturally enforce charge-parity symmetry in the equations of quantum chromodynamics. The next year, Frank Wilczek and Steven Weinberg independently realized that this new field would imply the existence of a particle.

The axion.

Dark matter was just coming on the cosmic scene. Axions weren’t invented to solve that problem, but physicists very quickly realized that the complex physics of the early Universe could absolutely flood the cosmos with axions. What’s more, they would largely ignore regular matter and sit quietly in the background. In other words, the axion was an excellent dark matter candidate.

But axions were pushed aside as the WIMPs hypothesis gained more steam. Back-of-the-envelope calculations showed that the natural mass range of the WIMP would precisely match the abundances needed to explain the amount of dark matter in the Universe, with no other fine-tuning or adjustments required.

Never ones to let the cosmologists get in the way of a good time, the particle physics community kept up interest in the axion, finding different variations on the particle and devising clever experiments to see if the axion existed. One experiment requires nothing more than a gigantic magnet since, in an extremely strong magnetic field, axions can spontaneously convert into photons.
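Those magnet-based searches (haloscopes such as the Axion Dark Matter eXperiment) work because the photon produced in the conversion carries essentially the axion's rest-mass energy, so the signal shows up at a frequency set by the axion mass. For an assumed, purely illustrative mass of 10 micro-electron-volts:

$$\nu = \frac{m_a c^2}{h} \approx \frac{10^{-5}\ \mathrm{eV}}{4.14 \times 10^{-15}\ \mathrm{eV \cdot s}} \approx 2.4\ \mathrm{GHz},$$

i.e., a faint microwave tone that a tuned cavity inside the magnet can, in principle, pick up.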

To date, no hard evidence for the axion has shown up. But WIMPs have proven to be elusive, so cosmologists are showing more love to the axion and identifying surprising ways that it might be found.

A sloshy Universe

Axions are tiny, even for subatomic particles. The lightest known particle is the neutrino, which weighs no more than 0.086 electron-volts (or eV). Compare that to, say, the electron, which weighs over half a million eV. The exact mass of the axion isn’t known, and there are many models and versions of the particle, but it can have a mass all the way down to a trillionth of an eV… and even lower.

In fact, axions belong to a much broader class of “ultra-light” dark matter particle candidates, which can have masses down to 10^-24 eV. This is multiple billions of times lighter than the WIMPs—and indeed most of the particles of the Standard Model.

That means axions and their friends act nothing like most of the particles of the Standard Model.

First off, it may not even be appropriate to refer to them as particles. They have such little mass that their de Broglie wavelength—the size of the quantum wave associated with every particle—can stretch into macroscopic proportions. In some cases, this wavelength can be a few meters across. In others, it’s comparable to a star or a solar system. In still others, a single axion “particle” can stretch across an entire galaxy.
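The numbers behind that range follow from the de Broglie relation $\lambda = h/(mv)$. Taking a typical galactic velocity of about 200 km/s (the masses below are illustrative choices, not measured values):

$$\lambda = \frac{h}{m v} \approx 2\ \mathrm{m} \ \ \text{for}\ \ m \sim 10^{-3}\ \mathrm{eV}/c^2, \qquad \lambda \approx 600\ \mathrm{pc} \ \ \text{for}\ \ m \sim 10^{-22}\ \mathrm{eV}/c^2,$$

the latter comparable to a galactic core, which is why a single ultra-light "particle" can blanket a huge region.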

In this view, the individual axion particles would be subsumed into a larger quantum wave, like an ocean of dark matter so large and vast that it doesn’t make sense to talk about its individual components.

And because axions are bosons, they can synchronize their quantum wave nature, becoming a distinct state of matter: a Bose-Einstein condensate. In a Bose-Einstein condensate, most of the particles share the same low-energy state. When this happens, the de Broglie wavelength is larger than the average separation between the particles, and the waves of the individual particles all add up together, creating, in essence, a super-particle.
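That condition can be stated compactly: with number density $n$, the waves begin to overlap and condense once

$$n\,\lambda_{\mathrm{dB}}^{3} \gtrsim 1,$$

i.e., once there is at least one particle per de Broglie-wavelength-sized box. For ultra-light axions the wavelength is so large that this is easily satisfied at galactic densities.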

This way, we may get axion “stars”—clumps of axions acting as a single particle. Some of these axion stars may be a few thousand kilometers across, wandering across interstellar space. Still others may be the size of galactic cores, which might explain an issue with the traditional WIMP picture.

The best description of dark matter in general is that it is “cold,” meaning that the individual particles do not move fast compared to the speed of light. This allows them to gravitationally interact and form the seeds of structures like galaxies and clusters. But this process is a bit too efficient. According to simulations, cold dark matter tends to form more small, sub-galactic clumps than we observe, and it tends to make the cores of galaxies much, much denser than we see.

Axions, and ultra-light dark matter in general, can provide a solution here because they would operate in two modes. At large scales, they can act like regular cold dark matter. But inside galaxies, they can condense, forming tight clumps. Critically, these clumps have uniform densities within them. This smooths out the distribution of axions within galaxies, preventing the formation of smaller clumps and ultra-dense cores.

A messy affair

Over the decades, astronomers and physicists have found an astounding variety of ways that axions might reveal their presence in the Universe. Because of their curious ability to transmute into photons in the presence of strong magnetic fields, any place that features strong fields—think neutron stars or even the solar corona—could produce extra radiation due to axions. That makes them excellent hunting grounds for the particles.

Axion stars—also sometimes known provocatively as dark stars—would be all but invisible under most circumstances. That is, until they destabilize in a cascading chain reaction of axion-to-photon conversion and blow themselves up.

Even the light from distant galaxies could betray the existence of axions. If they exist in a dense swarm surrounding a galaxy, their conversion to photons will contribute to the galaxy’s light, creating a signal that the James Webb Space Telescope can pick up.

To date, despite all these ideas, there hasn’t been a single shred of solid evidence for the existence of axions, which naturally drops them down a peg or two on the credibility scale. But that doesn’t mean that axions aren’t worth investigating further. The experiments conducted so far only place limits on what properties they might have; there’s still plenty of room for viable axion and axion-like candidates, unlike their WIMPy cousins.

There’s definitely something funny going on with the Universe. The dark matter hypothesis—that there is a large, invisible component to matter in the Universe—isn’t that great of an idea, but it’s the best one we have that fits the widest range of available evidence. For a while, we thought we knew what the identity of that matter might be, and we spent decades (and small fortunes) in that search.

But while WIMPs were the mainstay hypothesis, that didn’t snuff out alternative paths. Dozens of researchers have investigated modified forms of gravity with equally little success. And a small cadre has kept the axion flame alive. It’s a good thing, too, since their obscure explorations of the corners of particle physics laid the groundwork to flesh out axions into a viable competitor to WIMPs.

No, we haven’t found any axions. And we still don’t know what the dark matter is. But it’s only by pushing forward—advancing new ideas, testing them against the reality of observations, and when they fail, trying again—that we will come to a new understanding. Axions may or may not be dark matter; the best we can say is that they are promising. But who wouldn’t want to live in a Universe filled with dark stars, invisible Bose-Einstein condensates, and strange new particles?

Photo of Paul Sutter



Discovery of HMS Endeavour wreck confirmed

By 2016, RIMAP’s volunteers, operating on grants and private donations, had located 10 of the 13 wrecks, almost exactly where historical charts said they should be. And the search had gotten a boost from the 1998 discovery of a 200-year-old paper trail linking the troop transport Lord Sandwich to its former life as HMS Endeavour.

Narrowing the field

One candidate was found just 500 meters off the coast of Rhode Island (designated RI 2394), 14 meters below the surface and buried in nearly 250 years’ worth of sediment and silt. RIMAP’s team concluded in 2018 that this was likely the wreck of the Endeavour, although the researchers emphasized that they needed to accumulate more evidence to support their conclusions. That’s because only about 15 percent of the ship survived. Any parts of the hull that weren’t quickly buried by silt have long since decomposed in the water.

The ANMM felt confident enough in its own research by 2022 to hold that controversial news conference announcing the discovery, against RIMAP’s objections. But the evidence is now strong enough for RIMAP to reach the same conclusion. “In 1999 and again in 2019, RIMAP and ANMM agreed on a set of criteria that, if satisfied, would permit identification of RI 2394 as Lord Sandwich,” the authors wrote in the report’s introduction. “Based on the agreed preponderance of evidence approach, enough of these criteria have now been met… to positively identify RI 2394 as the remnants of Lord Sandwich, formerly James Cook’s HM Bark Endeavour.”

The Rhode Island Historical Preservation and Heritage Commission and the ANMM are now collaborating to ensure that the wreck site is protected in the future.



Researchers get viable mice by editing DNA from two sperm


Altering chemical modifications of DNA lets the DNA from two sperm make a mouse.

For many species, producing an embryo is a bit of a contest between males and females. Males want as many offspring as possible and want the females to devote as many resources as possible to each of them. Females do better by keeping their options open and distributing resources in a way to maximize the number of offspring they can produce over the course of their lives.

In mammals, this plays out through the chemical modification of DNA, a process called imprinting. Males imprint their DNA by adding methyl modifications to it in a way that alters the activity of genes in order to promote the growth of embryos. Females do similar things chemically but focus on shutting down genes that promote embryonic growth. In a handful of key regions of the genome, having only the modifications specific to one sex is lethal, as the embryo can’t grow to match its stage of development.

One consequence of this is that you normally can’t produce embryos using only the DNA from eggs or from sperm. But over the last few years, researchers have gradually worked around the need for imprinted sites to have one copy from each parent. Now, in a very sophisticated demonstration, researchers have used targeted editing of methylation to produce mice from the DNA of two sperm.

Imprinting and same-sex parents

There’s a long history of studying imprinting in mice. Long before the genome was sequenced, people had identified specific parts of the chromosomes that, if deleted, were lethal—but only if inherited from one of the two sexes. They correctly inferred that this meant that the genes in the region are normally inactivated in the germ cells of one of the sexes. If they’re deleted in the other sex, then the combination that results in the offspring—missing on one chromosome, inactivated in the other—is lethal.

Over time, seven critical imprinted regions were identified, scattered throughout the genome. And, roughly 20 years ago, a team managed to find the right deletion to enable a female mouse to give birth to offspring that received a set of chromosomes from each of two unfertilized eggs. The researchers drew parallels to animals that can reproduce through parthenogenesis, where the female gives birth using unfertilized eggs. But the mouse example obviously took a big assist via the manipulation of egg cells in culture before being implanted in a mouse.

By 2016, researchers were specifically editing in deletions of imprinted genes in order to allow the creation of embryos by fusing stem cell lines that only had a single set of chromosomes. This was far more focused than the original experiment, as the deletions were smaller and affected only a few genes. By 2018, they had expanded the repertoire by figuring out how to get the genomes of two sperm together in an unfertilized egg with its own genome eliminated.

The products of two male parents, however, died the day after birth. This is either due to improperly compensating for imprinting or simply because the deletions had additional impacts on the embryo’s health. It took until earlier this year, when a very specific combination of 20 different gene edits and deletions enabled mice generated using the chromosomes from two sperm cells to survive to adulthood.

The problem with all of these efforts is that the deletions may have health impacts on the animals and may still cause problems if inherited from the opposite sex. So, while it’s an interesting way to confirm our understanding of the role of imprinting in reproduction, it’s not necessarily the route to using this as a reliable reproductive tool. Which finally brings us to the present research.

Roll your own imprinting

Left out of the above is the nature of the imprinting itself: How does a chunk of chromosome and all the genes on it get marked as coming from a male or female? The secret is to chemically modify that region of the DNA in a way that doesn’t alter base pairing, but does allow it to be recognized as distinct by proteins. The most common way of doing this is to link a single carbon atom (a methyl group) to the base cytosine. This tends to shut nearby genes down, and it can be inherited through cell division, since there are enzymes that recognize when one of the two DNA strands is unmodified and add a methyl to it.
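A toy sketch of that maintenance step, purely for illustration (the real maintenance enzyme, DNMT1, acts on hemimethylated CpG sites; this simplified model just copies a parent strand's methylation flags onto newly replicated daughter duplexes):

```python
# Toy model of maintenance methylation: after replication, each daughter duplex
# pairs one old (possibly methylated) strand with a new, unmethylated strand.
# A "maintenance enzyme" then methylates the new strand wherever the old one
# is methylated, so the pattern survives cell division.
# Purely illustrative; real methylation acts on cytosines in a CpG context.

def replicate(methylation: list[bool]) -> list[tuple[list[bool], list[bool]]]:
    """Return two daughter duplexes: (old strand, brand-new unmethylated strand)."""
    new_strand = [False] * len(methylation)
    return [(methylation[:], new_strand[:]), (methylation[:], new_strand[:])]

def maintain(duplex: tuple[list[bool], list[bool]]) -> list[bool]:
    """Copy methyl marks from the old strand onto the new one (hemimethylated -> fully methylated)."""
    old, new = duplex
    return [o or n for o, n in zip(old, new)]

# A parent pattern with an "imprinted" methylated block in the middle.
parent = [False, False, True, True, True, False, False]
daughters = [maintain(d) for d in replicate(parent)]
print(daughters)  # both daughters carry the same pattern as the parent
```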

Methylation turns out to explain imprinting. The key regions for imprinting are methylated differently in males and females, which influences nearby gene activity and can be maintained throughout all of embryonic development.

So, to make up for the imprinting problems caused when both sets of chromosomes come from the same sex, what you need to do is a targeted reprogramming of methylation. And that’s what the researchers behind the new paper have done.

First, they needed to tell the two sets of chromosomes apart. To do that, they used two distantly related strains of mice, one standard lab strain that originated in Europe and a second that was caught in the wild in Thailand less than a century ago. These two strains have been separated for long enough that they have a lot of small differences in DNA sequences scattered throughout the genome. So, it was possible to use these to target one or the other of the genomes.

This was done using parts of the DNA editing systems that have been developed, the most famous of which is CRISPR/Cas9. These systems have a protein that pairs with an RNA sequence to find a matching sequence in DNA. In this case, those RNAs could be made so that they target imprinting regions in just one of the two mouse strains. The protein/RNA combinations could also be linked to enzymes that modify DNA, either adding methyls or removing them.
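A minimal sketch of how sequence differences between the two strains could be used to pick strain-specific targets; the sequences, the eight-base guide length, and the function name are invented for illustration, and real guide design involves PAM sites and off-target checks that this ignores:

```python
# Sketch: find fixed-length windows that occur in one strain's imprinting-region
# sequence but not the other's, so a guide RNA built from such a window would,
# in this toy model, direct the methylation editor to only one of the two genomes.

GUIDE_LEN = 8

def strain_specific_guides(target: str, other: str, k: int = GUIDE_LEN) -> list[str]:
    """Return k-mers present in `target` but absent from `other`."""
    target_windows = {target[i:i + k] for i in range(len(target) - k + 1)}
    other_windows = {other[i:i + k] for i in range(len(other) - k + 1)}
    return sorted(target_windows - other_windows)

lab_strain  = "ACGTTGCAATGCCGATCGGA"   # toy sequence around an imprinting site
wild_strain = "ACGTTGCAATGACGATCGGA"   # same region with one single-base difference

print(strain_specific_guides(lab_strain, wild_strain))
# Every printed window overlaps the strain-specific base, so it matches only the lab strain.
```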

To bring all this together, the researchers started with an egg and deleted the genome from it. They then injected the heads of sperm, one from the lab strain, one from the recently wild mouse. This left them with an egg with two sets of chromosomes, although a quarter of them would have two Y chromosomes and thus be inviable (unlike the Y, the X has essential genes). Arbitrarily, they chose one set of chromosomes to be female and targeted methylation and de-methylation enzymes to it in order to reprogram the pattern of methylation on it. Once that was done, they could allow the egg to start dividing and implant it into female mice.

Rare success

The researchers spent time ensuring that the enzymes they had were modifying the methylation as expected and that development started as usual. Their general finding is that the enzymes did change the methylation state for about 500 bases on either side of the targeted site and did so pretty consistently. But there are seven different imprinting sites that need to be modified, each of which controls multiple nearby genes. So, while the modifications were consistent, they weren’t always thorough enough to result in the expected changes to all of the nearby genes.

This limited efficiency showed up in the rate of survival. Starting with over 250 reprogrammed embryos that carried DNA from two males, they ended up with 16 pregnancies, but only seven pups delivered: four died at birth, and three were born alive; based on other experiments, most of the rest died during the second half of embryonic development. Of the three live ones, one was nearly 40 percent larger than the typical pup, suggesting problems regulating growth—it died the day after birth.

All three live births were male, although the numbers are small enough that it’s impossible to tell if that’s significant or not.

The researchers suggest several potential reasons for the low efficiency. One is simply that, while the probability of properly reprogramming at least one of the sites is high, reprogramming all seven is considerably more challenging. There’s also the risk of off-target effects, where the modification takes place in locations with similar sequences to the ones targeted. They also concede that there could be other key imprinted regions that we simply haven’t identified yet.
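The first of those explanations is easy to quantify. If each of the seven imprinting regions is reprogrammed correctly with some independent per-site probability $p$ (a simplifying assumption; the paper doesn't reduce the per-site success rates to a single number), the chance that every one of them is correct is $p^7$:

$$P(\text{all 7 correct}) = p^{7}: \qquad 0.9^{7} \approx 0.48, \qquad 0.8^{7} \approx 0.21, \qquad 0.7^{7} \approx 0.08,$$

so even fairly reliable per-site editing leaves most embryos with at least one improperly imprinted region, in line with the low survival numbers above.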

We would need to sort that out if we want to use this approach as a tool, which might be potentially useful as a way to breed mice that carry mutations that affect female viability or fertility. But this work has already been useful even in its inefficient state, because it serves as a pretty definitive validation of our ideas about the function of imprinting in embryonic development, as well as the critical role methylation plays in this process. If we weren’t largely right about both of those, the efficiency of this approach wouldn’t be low—it would be zero.

PNAS, 2025. DOI: 10.1073/pnas.2425307122  (About DOIs).

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Sailing the fjords like the Vikings yields unexpected insights


“On we sweep with threshing oar”

Greer Jarrett has identified four possible small ports, or “havens,” used by Vikings along the Norwegian coast.

Experimental archaeologist Greer Jarrett of Lund University in Sweden has been sailing in the footsteps of Vikings for the last three years.

If you want to learn more about how and where the Vikings sailed, making the journey through the fjords yourself in replica boats is a practical, hands-on approach to achieving that end. Greer Jarrett, an archaeologist at Lund University in Sweden, has spent the last three years doing just that, sailing more than 5,000 kilometers along known Viking trade routes in open, spare-rigged clinker boats similar to those used by the Vikings.

Not only has Jarrett learned a great deal about the boats themselves, he also identified four possible havens along the Norwegian coast, part of what may have been a decentralized network that played a crucial role in trade and travel during that period. And those ports are located farther out to sea than other major ports and hubs known to date, according to a paper he published in the Journal of Archaeological Method and Theory.

It’s just the latest intriguing discovery enabled by the growing field of experimental archaeology, whereby researchers seek to reverse-engineer all manner of ancient technologies. Experimental archaeologists have, for instance, built their own versions of Early Upper Paleolithic adzes, axes, and chisels. The resulting fractures and wear enabled them to develop new criteria for identifying the likely functions of ancient tools. Others have tried to cook like the Neanderthals, concluding that flint flakes were surprisingly effective for butchering birds, and that roasting the birds damages the bones to such an extent that it’s unlikely they would be preserved in the archaeological record.

Kent State University’s Metin Eren has done practical experiments to study, for instance, the trajectories of atlatls attached to spears tipped with replica Clovis points, and how their performance compares to javelins used by Neanderthals. He even fashioned rudimentary blades out of his own frozen feces to test whether they could cut through pig hide, muscle, and tendon—solely to test a famous anthropological legend about an elderly Inuit man in the 1950s who purportedly did the same to kill and skin a dog, using its rib cage as a makeshift sled to venture off into the Arctic. (It did not work, so myth: busted. But it did snag Eren an Ig Nobel prize.)

Taking a hands-on, experimental archaeological approach to studying the Vikings makes sense in light of the dearth of contemporary written sources. “We have a few things written by outsiders, but there’s very, very few accounts written or delivered by people from Scandinavia during that period,” Jarrett told Ars. “We normally rely on indirect forms of evidence, be that genetics or archaeology or linguistics, which show strong, very frequent connections across maritime areas in the North Atlantic. But because traveling by boat is kind of an archaeologically invisible act, you don’t leave any footprints. So we have very little information about the voyages between these points.”

The sailing voyages made by Greer Jarrett during the research project, as well as the four possible Viking harbors he identified. Credit: Greer Jarrett

Jarrett and his crew used four or five different replica boats for their test voyages. Most were built by volunteers, enthusiasts, or students Jarrett had met during his considerable time in the field. They then sailed along the west coast of the Scandinavian Peninsula, a core area of Viking seafaring.

“These are reconstructions of traditional Norwegian boats from the 1800s and early 1900s,” said Jarrett. “My idea was, because of this really long-term continuity in traditional boat building practices, especially in Norway, it might be possible to use these later boats which have lots of similarities to try and work out the potentials of where people might have gotten out. It’s the idea of suggesting potentials based on practical experience to try and join those dots between the different evidence we have across the Viking world.”

That decision has led to some criticism from colleagues because of the enormous gap in time, but Jarrett defends his choice. “The Viking Age ends in the 11th century, and we’re talking about boats from 800 years later,” he said. “But the construction techniques and the way they are rigged and their general performance characteristics are similar enough. Because this is a project about voyages and not a project about boat building, it seemed like a defensible analogy.”

Seeking safe harbor

“On the long-range voyages, we worked in watches of four hours on and four hours off, and that is just about long enough to get some sleep on your off watch, but also just about short enough that you don’t get really, really, really cold, which is obviously a risk,” said Jarrett. “It was manageable, but we looked like penguins. I mean, we’re wearing six layers of wool at any time and sleeping all stacked together for warmth. But other times it’s really nice. The spring and the autumn in Scandinavia, there’s much more likelihood of high-pressure cycles, which means that it’s clearer and sunnier than in the summer itself.”

Nonetheless, there were some rough moments, such as when the spar holding up the mainsail snapped, forcing the crew to improvise and lash two oars together to hold the sail so they could continue their journey. It took several days to repair the boat so it could sail again. There was no safety boat following along in case the crew got into trouble, and no engine, although they did carry a life raft, which they never had to use.

Based on his sailing trials, Jarrett believes that the Vikings had no need for navigational tools like maps, a compass, or a sextant, relying instead on what he calls “mental maps”—or a “maritime cultural mindscape”—based on sailors’ memories and experiences passed down orally through generations. Those maps might also be informed by the myths linked to well-known coastal landmarks, such as skerries, small islets, or reefs.

“People had been moving by boat along the west coast of Scandinavia for a really, really, really long time, probably since the late Neolithic, if not earlier—thousands of years before the Viking age,” said Jarrett. “There are big trading networks in place beforehand, and that is reflected in the names, place names along the west coast. My primary argument is if you spend 3,000 years traveling up and down a coastline in which you can use the coast at all times for navigation, then it’s unnecessary to develop instrumentation.”

“Instruments are used when you are in a place out in the open sea that you don’t know,” Jarrett continued. “We definitely know they didn’t have compasses because those don’t arrive from China until the 1200s. There are these ideas about sunstones and sundials, or little sun compasses, which are entirely possible. But there’s no legitimate proof of either of them archaeologically yet. I may well be proved wrong if we find them at some point, but I don’t think they’re necessary for this at all.”

Based on the sailing trials, archaeological and documentary evidence of Viking Age maritime centers, and digital reconstructions of past sea levels, Jarrett was able to develop a useful set of criteria for evaluating potential havens. For instance, the site should be reachable in low visibility, with land or sea marks that sailors could use as bearings; it should be large enough to accommodate multiple vessels of at least the size of a fyring (which can house a crew of four to 10 people); it should offer good protection from sea swell and storm surges; and it should have access to fresh water, among other criteria. Four sites scored high enough against those criteria to qualify as possible Viking havens.
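As a rough illustration of how a rubric like that could be applied (the criteria names, weights, and the example site below are invented for this sketch; Jarrett’s actual scoring scheme is laid out in the paper), a scoring pass might look something like this:

```python
# Illustrative sketch only: invented criteria weights and site data standing in
# for the kind of rubric described above. Not Jarrett's actual scoring scheme.
CRITERIA_WEIGHTS = {
    "reachable_in_low_visibility": 2,   # land or sea marks usable as bearings
    "room_for_several_fyringar": 2,     # space for multiple crewed vessels
    "shelter_from_swell_and_surge": 3,
    "fresh_water_nearby": 1,
}

def haven_score(site: dict) -> int:
    """Sum the weights of the criteria a candidate site satisfies."""
    return sum(weight for name, weight in CRITERIA_WEIGHTS.items() if site.get(name))

candidate = {
    "name": "hypothetical skerry anchorage",
    "reachable_in_low_visibility": True,
    "room_for_several_fyringar": True,
    "shelter_from_swell_and_surge": True,
    "fresh_water_nearby": False,
}

print(candidate["name"], haven_score(candidate), "of", sum(CRITERIA_WEIGHTS.values()))
```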

The four sites are Smørhamn, located at the confluence of Oldersund and the Frøysjø, where an inn and trading post are known to have existed since at least the late 17th century; the archipelago of Sørøyane between Stad and Ålesund, near where the sea battle of Hjörungavágr was fought circa 986 CE; Bjørnsund, a number of small islands off the southwestern tip of Hustadvika; and the island of Storfosna, which appears on 16th and 17th century charts.

“I’m not saying, ‘This is where they went,'” said Jarrett. “I’m saying that, with these kinds of boats under these conditions, it would be possible to go to these places. And it’s much more difficult—not impossible, but much more difficult—to go to these other places or to sail in these other conditions.”

Pining for the fjords

The next step is for Jarrett and other archaeologists to hunt for evidence in support of his hypothesis. “Most of these sites have never been excavated,” said Jarrett. “There’s been a long assumption that these are landing places with the idea that you are dragging your boat ashore. I’m very opposed to that idea because these are two-and-a-half-ton boats, let alone the cargo. Unless you have a team of oxen and 20 people at your command, there is no way you’re getting them on the beach. I’m very convinced that these places have jetties and mooring posts likely preserved underwater. All of that organic material survives much better underwater than it does on land. So I think that’s very possible.”

They might also find smaller items suggestive of a thriving harbor community. “Whenever you go into land, you’ve got something that’s broken, so you need to do repairs,” said Jarrett. “So things like clench nails or piles of ballast stones or signs of smithing—the typical kind of things you’d use for repairing your ship, I think are possible to find.” Jarrett’s methodology might also prove useful for studying other seafaring communities.

The practical experience of sailing the same seas as the Vikings naturally led to some surprising insights. “You are able to ask very different questions the minute you walk away from your desk and get on a boat,” said Jarrett. “I think it’s essential to do that because you think in new ways. In terms of the results themselves, the boats are extremely seaworthy crafts. When you get in them for the first time, you don’t think that, because they’re very, very light. They feel very flimsy, and they’re very low in the water compared to a modern sailing boat. So you feel really in touch with the wave, which is kind of scary. But because they’re so flexible and because of the way they’re rigged, they’re actually really stable, even in big waves.”

“We kept going out thinking, ‘Oh, this is maybe the limit of what this boat can tolerate,’ and then it would be fine, and we’d be, ‘Okay, let’s go a little bit in slightly bigger waves with slightly stronger wind,'” Jarrett continued. “So I think our comfort zones definitely visibly expanded during that period. And I had the chance to work with the same crews over three years. By the end of those three years, we were doing stuff that we would never have been able to do at the beginning.”

Another big difference from modern boats, Jarrett discovered, is that one cannot sail a traditional Viking craft alone. “It has to be a collaborative effort because of how you need a person at the front and the back of the boat basically at all times,” he said. “So developing the crew together and gaining not only skills, but also trust between us meant that we could do things in 2024 that seemed completely insane just a couple of years earlier. I cannot imagine what that is like if you have an entire lifetime of Viking sailors working together for 30 years. It must be an incredible way of creating social bonds.”

Journal of Archaeological Method and Theory, 2025. DOI: 10.1007/s10816-025-09708-6  (About DOIs).

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Sailing the fjords like the Vikings yields unexpected insights Read More »

how-a-grad-student-got-lhc-data-to-play-nice-with-quantum-interference

How a grad student got LHC data to play nice with quantum interference


New approach is already having an impact on the experiment’s plans for future work.

The ATLAS particle detector of the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) in Geneva, Switzerland. Credit: EThamPhoto/Getty Images

Measurements at the Large Hadron Collider have been stymied by one of the most central phenomena of the quantum world. But now, a young researcher has championed a new method to solve the problem using deep neural networks.

The Large Hadron Collider is one of the biggest experiments in history, but it’s also one of the hardest to interpret. Unlike seeing an image of a star in a telescope, saying anything at all about the data that comes out of the LHC requires careful statistical modeling.

“If you gave me a theory [that] the Higgs boson is this way or that way, I think people imagine, ‘Hey, you built the experiment, you should be able to tell me what you’re going to see under various hypotheses!’” said Daniel Whiteson, a professor at the University of California, Irvine. “But we don’t.”

One challenge with interpreting LHC data is interference, a core feature of quantum mechanics. Interference allows two possible events to inhibit each other, weakening the likelihood of seeing the result of either. In the presence of interference, physicists have had to fall back on fuzzier statistical methods to analyze the data, sacrificing some of its power and increasing the uncertainty of the results.

However, a recent breakthrough suggests a different way to tackle the problem. The ATLAS collaboration, one of two groups studying proton collisions at the LHC, released two papers last December that describe new ways of exploring data from their detector. One describes how to use a machine learning technique called Neural Simulation-Based Inference to maximize the potential of particle physics data. The other demonstrates its effectiveness with the ultimate test: re-doing a previous analysis with the new technique and seeing dramatic improvement.

The papers are the culmination of a young researcher’s six-year quest to convince the collaboration of the value of the new technique. Its success is already having an impact on the experiment’s plans for future work.

Making sense out of fusing bosons

Each particle collision at the LHC involves many possible pathways in which different particles combine to give rise to the spray of debris that experimenters see. In 2017, David Rousseau at IJCLab in Orsay, a member of the ATLAS collaboration, asked one of his students, Aishik Ghosh, to improve his team’s ability to detect a specific pathway. That particular pathway is quite important since it’s used to measure properties of the Higgs boson, a particle (first measured in 2012) that helps explain the mass of all other fundamental particles.

It was a pretty big ask. “When a grad student gets started in ATLAS, they’re a tiny cog in a giant, well-oiled machine of 3,500 physicists, who all seem to know exactly what they’re doing,” said Ghosh.

The pathway Ghosh was asked to study occurs via several steps. First, the two colliding protons each emit a W boson, a particle associated with the weak nuclear force. These two bosons fuse together, changing their identity to form a Higgs boson. The Higgs boson then decays, forming a pair of Z bosons, another particle associated with the weak force. Finally, those Z bosons themselves each decay into a lepton, like an electron, and its antimatter partner, like a positron.

A Feynman diagram for the pathway studied by Aishik Ghosh. Credit: ATLAS

Measurements like the one Ghosh was studying are a key way of investigating the properties of the Higgs boson. By precisely measuring how long it takes the Higgs boson to decay, physicists could find evidence of it interacting with new, undiscovered particles that are too massive for the LHC to produce directly.

Ghosh started on the project, hoping to find a small improvement in the collaboration’s well-tested methods. Instead, he noticed a larger issue. The goal he was given, of detecting a single pathway by itself, didn’t actually make sense.

“I was doing that and I realized, ‘What am I doing?’ There’s no clear objective,” said Ghosh.

The problem was quantum interference.

How quantum histories interfere

One of the most famous demonstrations of the mysterious nature of quantum mechanics is called the double-slit experiment. In this demonstration, electrons are shot through a screen with two slits that allow them to pass through to a photographic plate on the other side. With one slit covered, the electrons form a pattern centered on the opening. The photographic plate lights up bright right across from the slit and dims further away from it.

With both slits open, you might expect the pattern simply to get brighter as more electrons reach the photographic plate. That’s not what happens. The two slits do not give rise to two nice bright peaks; instead, you see a rippling pattern in which some areas get brighter while others get dimmer, even though the dimmer areas should, in principle, be easier for electrons to reach.

The effect happens even if the electrons are shot at the screen one by one to stop them from influencing each other directly. It’s as if each electron carries with it two possible histories, one in which it goes through one slit and another where it goes through the other before both end up at the same place. These two histories interfere with each other so that some destinations become less likely instead of more likely.
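A toy calculation makes the point concrete (the amplitudes below are arbitrary numbers chosen for illustration, not measured values): probabilities come from squaring the sum of the histories’ amplitudes, not from summing the individual probabilities, so adding a second history can make an outcome less likely.

```python
# Toy illustration with arbitrary numbers: each "history" contributes a complex
# amplitude, and the amplitudes are added before squaring.
import cmath

psi_slit1 = 0.6 * cmath.exp(0j)              # history: electron went through slit 1
psi_slit2 = 0.6 * cmath.exp(1j * cmath.pi)   # history: went through slit 2, arriving
                                             # half a wavelength out of phase

classical = abs(psi_slit1) ** 2 + abs(psi_slit2) ** 2   # summing probabilities: 0.72
quantum = abs(psi_slit1 + psi_slit2) ** 2               # summing amplitudes first: ~0.0

print(classical, quantum)   # opening the second slit made this spot on the plate less likely
```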

Results of the double-slit experiment. Credit: Jordgette (CC BY-SA 3.0)

For electrons in the double-slit experiment, the two different histories are two different paths through space. For a measurement at the Large Hadron Collider, the histories are more abstract—paths that lead through transformations of fields. One history might be like the pathway Ghosh was asked to study, in which two W bosons fuse to form a Higgs boson before the Higgs boson splits into two Z bosons. But in another history, the two W bosons might fuse and immediately split into two Z bosons without ever producing a Higgs.

Both histories have the same beginning, with two W bosons, and the same end, with two Z bosons. And just as the two histories of electrons in the double-slit experiment can interfere, so can the two histories for these particles.

Another possible history for colliding particles at the Large Hadron Collider, which interferes with the measurement Ghosh was asked to do. Credit: ATLAS

That interference makes the effect of the Higgs boson much more challenging to spot. ATLAS scientists wanted to look for two pairs of electrons and positrons, which would provide evidence that two Z bosons were produced. They would classify their observations into two types: observations that are evidence for the signal they were looking for (that of a decaying Higgs boson) and observations of events that generate this pattern of particles without the Higgs boson acting as an intermediate (the latter are called the background). But the two types of observations, signal and background, interfere. With a stronger signal, corresponding to more Higgs bosons decaying, you might observe more pairs of electrons and positrons… but if these events interfere, you also might see those pairs disappear.
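In symbols, this is the standard quantum-mechanical cross term (a textbook identity rather than a formula taken from the ATLAS papers). Writing A_s for the amplitude of the Higgs-mediated history and A_b for the background history, the rate of seeing a given final state goes as

$$ P \propto |A_s + A_b|^2 = |A_s|^2 + |A_b|^2 + 2\,\mathrm{Re}\!\left(A_s^{*} A_b\right), $$

and because the interference term can be negative, turning up the signal can remove events from parts of the data rather than add them.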

Learning to infer

In traditional approaches, those disappearances are hard to cope with, even when using methods that already incorporate machine learning.

One of the most common uses of machine learning is classification—for example, distinguishing between pictures of dogs and cats. You train the machine on pictures of cats and pictures of dogs, and it tells you, given a picture, which animal is the most likely match. Physicists at the LHC were already using this kind of classification method to characterize the products of collisions, but it functions much worse when interference is involved.

“If you have something that disappears, you don’t quite know what to train on,” said David Rousseau. “Usually, you’re training signal versus background, exactly like you’re training cats versus dogs. When there is something that disappears, you don’t see what you trained on.”

At first, Ghosh tried a few simple tricks, but as time went on, he realized he needed to make a more fundamental change. He reached out to others in the community and learned about a method called Neural Simulation-Based Inference, or NSBI.

In older approaches, people had trained machine learning models to classify observations into signal and background, using simulations of particle collisions to make the training data. Then they used that classification to infer the most likely value of a number, like the amount of time it takes a Higgs boson to decay, based on data from an actual experiment. Neural Simulation-Based Inference skips the classification and goes directly to the inference.

Instead of trying to classify observations into signal and background, NSBI uses simulations to teach an artificial neural network to estimate a quantity called a likelihood ratio. Someone using NSBI would run many simulations describing different situations, such as letting the Higgs boson decay at different rates, and then check how often each type of simulation yields a specific observation. Comparing those frequencies across decay rates gives the likelihood ratio, a quantity that indicates which decay rate better explains the experimental evidence. If the neural network is good at estimating this ratio, it will be good at finding how long the Higgs takes to decay.
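A minimal sketch of the general idea, using a toy one-dimensional “simulator” and an off-the-shelf classifier rather than anything from ATLAS’s actual pipeline (the Gaussian simulator, parameter values, and network size are all assumptions for illustration): train a network to distinguish simulations run at two parameter values, then convert its output into an estimated likelihood ratio.

```python
# Minimal sketch of simulation-based likelihood-ratio estimation.
# Everything here (the Gaussian "simulator," parameter values, network size)
# is a stand-in for illustration, not the ATLAS implementation.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def simulate(theta, n):
    # Toy stand-in for a full physics simulation: the observable's
    # distribution shifts with the parameter theta (e.g., a decay rate).
    return rng.normal(loc=theta, scale=1.0, size=(n, 1))

theta0, theta1 = 0.0, 0.5                       # two hypothetical parameter values
x0, x1 = simulate(theta0, 20_000), simulate(theta1, 20_000)

# Train a network to tell the two simulated datasets apart.
X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300).fit(X, y)

def likelihood_ratio(x):
    # A well-trained classifier's output s(x) converts into an estimate of
    # p(x | theta1) / p(x | theta0) via r = s / (1 - s).
    s = clf.predict_proba(x)[:, 1]
    return s / (1.0 - s)

observed = simulate(0.5, 1_000)                 # pretend experimental data
# Summed log-likelihood-ratios indicate which hypothesis the data favors;
# here a positive total favors theta1, the value used to generate "observed."
print(np.log(likelihood_ratio(observed)).sum())
```

In the real analyses, the collaboration trains and validates such networks across many parameter values and then calibrates the resulting likelihoods, which is much of the work described in the two ATLAS papers.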

Because NSBI doesn’t try to classify observations into different categories, it handles quantum interference more effectively. Instead of trying to find the Higgs based on a signal that disappears, it examines all the data, trying to guess which decay time is the most likely.

Ghosh tested the method, which showed promising results on test data, and presented the results at a conference in 2019. But if he was going to convince the ATLAS collaboration that the method was safe to use, he still had a lot of work ahead of him.

Shifting the weight on ATLAS’ shoulders

Experiments like ATLAS have high expectations attached to them. A collaboration of thousands of scientists, ATLAS needs to not only estimate the laws of physics but also have a clear idea of just how uncertain those estimates are. At the time, NSBI hadn’t been tested in that way.

“None of this has actually been used on data,” said Ghosh. “Nobody knew how to quantify the uncertainties. So you have a neural network that gives you a likelihood. You don’t know how good the likelihood is. Is it well-estimated? What if it’s wrongly estimated just in some weird corner? That would completely bias your results.”

Checking those corners was too big a job for a single PhD student and too complex to complete within a single PhD degree. Ghosh would have to build a team, and he would need time to do it. That’s tricky in the academic world, where students go on to short-term postdoc jobs with the expectation that they quickly publish new results to improve their CV for the next position.

“We’re usually looking to publish the next paper within two to three years—no time to overhaul our methods,” said Ghosh. Fortunately, Ghosh had support. He finished his PhD under Rousseau and went to work with Daniel Whiteson, who encouraged him to pursue his ambitious project.

“I think it’s really important that postdocs learn to take those risks because that’s what science is,” Whiteson said.

Ghosh gathered his team. Another student of Rousseau’s, Arnaud Maury, worked to calibrate the machine’s confidence in its answers. A professor at the University of Massachusetts, Rafael Coelho Lopes de Sa, joined the project; his student Jay Sandesara played a key role in getting the calculation to work at full scale on a computer cluster. IJCLab emeritus RD Schaffer and University of Liège professor Gilles Louppe provided cross-checks and advice.

The team wanted a clear demonstration that their method worked, so they took an unusual step. They took data that ATLAS had already analyzed and performed a full analysis using their method instead, showing that it could pass every check the collaboration could think of. They would publish two papers, one describing the method and the other giving the results of their upgraded analysis. Zach Marshall, who was the computing coordinator for ATLAS at the time, helped get the papers through, ensuring that they were vetted by experts in multiple areas.

“It was a very small subset of our community that had that overlap between this technical understanding and the physics analysis experience and understanding that were capable of really speaking to whether that paper was sufficient and intelligible and useful. So we really had to make sure that we engaged that little group of humans by name,” said Marshall.

The new method showed significant improvements, getting a much more precise result than the collaboration’s previous analysis. That improvement, and the thorough checks, persuaded ATLAS to use NSBI more broadly going forward. It will give them much more precision than they expected, using the Higgs boson to search for new particles and clarify our understanding of the quantum world. When ATLAS discusses its future plans, it makes projections of the precision it expects to reach in the future. But those plans are now being upended.

“One of the fun things about this method that Aishik pushed hard is each time it feels like now we do that projection—here’s how well we’ll do in 15 years—we absolutely crush those projections,” said Marshall. “So we are just now having to redo a set of projections because we matched our old projections for 15 years out already today. It’s a very fun problem to have.”

How a grad student got LHC data to play nice with quantum interference Read More »

psyche-keeps-its-date-with-an-asteroid,-but-now-it’s-running-in-backup-mode

Psyche keeps its date with an asteroid, but now it’s running in backup mode

The spacecraft, built by Maxar Space Systems, will operate its electric thrusters for the equivalent of three months between now and November to keep the mission on track for arrival at asteroid Psyche in 2029.

“Through comprehensive testing and analysis, the team narrowed down the potential causes to a valve that may have malfunctioned in the primary line,” NASA said in a statement Friday. “The switch to the identical backup propellant line in late May restored full functionality to the propulsion system.”

The next waypoint on Psyche’s voyage will be a flyby of Mars in May 2026. Officials expect Psyche to keep that date, which is critical for using Mars’ gravity to slingshot the spacecraft deeper into the Solar System, eventually reaching the asteroid belt about four years from now.

NASA’s Psyche spacecraft takes a spiral path to the asteroid Psyche, as depicted in this graphic that shows the path from above the plane of the planets, labeled with key milestones of the prime mission. Credit: NASA/JPL-Caltech

At Psyche, the spacecraft will enter orbit and progressively move closer to the asteroid, using a suite of sensors to map its surface, measure its shape, mass, and gravity field, and determine its elemental composition. Observations through telescopes suggest Psyche is roughly 140 miles (226 kilometers) in diameter, or about the width of Massachusetts. But it’s likely not spherical; scientists describe its shape as more akin to a potato.

Potatoes come in lots of shapes, and researchers won’t know exactly what Psyche looks like until NASA’s asteroid explorer arrives in 2029. Psyche will be the first metallic, or M-type, asteroid visited by any spacecraft, and scientists are eager to study an object that’s largely made of metals—probably iron, nickel, and perhaps some rarer elements instead of rocky minerals.

With the Psyche spacecraft’s plasma thrusters back in action, these goals of NASA’s billion-dollar science mission remain achievable.

“The mission team’s dedication and systematic approach to this investigation exemplifies the best of NASA engineering,” said Bob Mase, Psyche project manager at  JPL, in a statement. “Their thorough diagnosis and recovery, using the backup system, demonstrates the value of robust spacecraft design and exceptional teamwork.”

But there’s still a lingering concern that whatever problem caused the valve to malfunction in the primary fuel line might also eventually affect the same kind of valve in the backup line.

“We are doing a lot of good proactive work around that possible issue,” wrote Lindy Elkins-Tanton, Psyche’s principal investigator at Arizona State University, in a post on X.

Psyche keeps its date with an asteroid, but now it’s running in backup mode Read More »