

Termite farmers fine-tune their weed control

Odontotermes obesus is one of the termite species that grow fungi, called Termitomyces, in their mounds. Workers collect dead leaves, wood, and grass and stack the material in underground fungus gardens called combs. There, the fungi break down the tough plant fibers, making them accessible to the termites in an elaborate form of symbiotic agriculture.

Like any other agriculturalist, however, the termites face a challenge: weeds. “There have been numerous studies suggesting the termites must have some kind of fixed response—that they always do the same exact thing when they detect weed infestation,” says Rhitoban Raychoudhury, a professor of biological sciences at the Indian Institute of Science Education and Research, “but that was not the case.” In a new Science study, Raychoudhury’s team discovered that termites have pretty advanced, surprisingly human-like gardening practices.

Going blind

Termites do not look like particularly good gardeners at first glance. They are effectively blind, which is not that surprising considering they spend most of their lives in complete darkness, working the endless corridors of their mounds. But termites make up for their lack of sight with other senses. “They can detect the environment based on advanced olfactory reception and touch, and I think this is what they use to identify the weeds in their gardens,” Raychoudhury says. To learn how termites react once they detect a weed infestation, his team collected some Odontotermes obesus and challenged them with different gardening problems.

The experimental setup was quite simple. The team placed some autoclaved soil sourced from termite mounds into glass Petri dishes. On this soil, Raychoudhury and his colleagues placed two fungus combs in each dish. The first piece acted as a control and was a fresh, uninfected comb with Termitomyces. “Besides acting as a control, it was also there to make sure the termites have the food because it is very hard for them to survive outside their mounds,” Raychoudhury explains. The second piece was intentionally contaminated with Pseudoxylaria, a filamentous fungal weed that often takes over Termitomyces habitats in termite colonies.



Putin OKs plan to turn Russian spacecraft into flying billboards

These are tough times for Russia’s civilian space program. In the last few years, Russia has cut back on the number of Soyuz crew missions it is sending to the International Space Station, and a replacement for the nearly 60-year-old Soyuz spacecraft remains elusive.

While the United States and China are launching more space missions than ever before, Russia’s once-dominant launch cadence is on a downhill slide.

Russia’s access to global markets dried up after Russian President Vladimir Putin launched the country’s invasion of Ukraine in February 2022. The fallout from the invasion killed several key space partnerships between Russia and Europe. Russia’s capacity to do new things in space now seems focused on military programs like anti-satellite weapons.

The Roscosmos State Corporation for Space Activities, Russia’s official space agency, may have a plan to offset the decline. Late last month, Putin approved changes to federal laws governing advertising and space activities to “allow for the placement of advertising on spacecraft,” Roscosmos posted on its official Telegram account.

We’ve seen this before

The Russian State Duma, dominated by Putin loyalists, previously approved the amendments.

“According to the amendments, Roscosmos has been granted the right, effective January 1, 2026, to place advertising on space objects owned by both the State Corporation itself and federally,” Roscosmos said. “The amendments will create a mechanism for attracting private investment in Russian space exploration and reduce the burden on the state budget.”

The law requires that advertising symbols not affect spacecraft safety. The Russian government said it will establish a fee structure for advertising on federally owned space objects.

Roscosmos didn’t say this, but advertisers eligible for the offer will presumably be limited to Russia and its allies. Any ads from the West would likely violate sanctions.

Rocket-makers have routinely applied decals, stickers, and special paint jobs to their vehicles. This is a particularly popular practice in Russia. Usually, these logos represent customers and suppliers. Sometimes they honor special occasions, like the 60th anniversary of the first human spaceflight mission by Soviet cosmonaut Yuri Gagarin and the 80th anniversary of the end of World War II.



Rocket Report: Bezos’ firm will package satellites for launch; Starship on deck


The long, winding road for Franklin Chang-Diaz’s plasma rocket engine takes another turn.

Blue Origin’s second New Glenn booster left its factory this week for a road trip to the company’s launch pad a few miles away. Credit: Blue Origin

Welcome to Edition 8.14 of the Rocket Report! We’re now more than a week into a federal government shutdown, but there’s been little effect on the space industry. Military space operations are continuing unabated, and NASA continues preparations at Kennedy Space Center, Florida, for the launch of the Artemis II mission around the Moon early next year. The International Space Station is still flying with a crew of seven in low-Earth orbit, and NASA’s fleet of spacecraft exploring the cosmos remains active. What’s more, so much of what the nation does in space is now done by commercial companies largely (but not completely) immune from the pitfalls of politics. But the effect of the shutdown on troops and federal employees shouldn’t be overlooked. They will soon miss their first paychecks unless political leaders reach an agreement to end the stalemate.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Danger from dead rockets. A new listing of the 50 most concerning pieces of space debris in low-Earth orbit is dominated by relics more than a quarter-century old, primarily dead rockets left to hurtle through space at the end of their missions, Ars reports. “The things left before 2000 are still the majority of the problem,” said Darren McKnight, lead author of a paper presented October 3 at the International Astronautical Congress in Sydney. “Seventy-six percent of the objects in the top 50 were deposited last century, and 88 percent of the objects are rocket bodies. That’s important to note, especially with some disturbing trends right now.”

Littering in LEO … The disturbing trends mainly revolve around China’s actions in low-Earth orbit. “The bad news is, since January 1, 2024, we’ve had 26 rocket bodies abandoned in low-Earth orbit that will stay in orbit for more than 25 years,” McKnight told Ars. China is responsible for leaving behind 21 of those 26 rockets. Overall, Russia and the Soviet Union lead the pack with 34 objects listed in McKnight’s Top 50, followed by China with 10, the United States with three, Europe with two, and Japan with one. Russia’s SL-16 and SL-8 rockets are the worst offenders, combining to take 30 of the Top 50 slots. An impact with even a modestly sized object at orbital velocity would create countless pieces of debris, potentially triggering a cascading series of additional collisions clogging LEO with more and more space junk, a scenario called the Kessler Syndrome.


New Shepard flies again. Blue Origin, Jeff Bezos’ space company, launched its sixth crewed New Shepard flight of the year Wednesday as the company works to increase the vehicle’s flight rate, Space News reports. This was the 36th flight of Blue Origin’s suborbital New Shepard rocket. The passengers were Jeff Elgin, Danna Karagussova, Clint Kelly III, Will Lewis, Aaron Newman, and Vitalii Ostrovsky. Blue Origin said it has now flown 86 people (80 unique individuals) into space. The New Shepard booster returned to a pinpoint propulsive landing, and the capsule parachuted into the desert a few miles from the launch site near Van Horn, Texas.

Two-month turnaround … This flight continued Blue Origin’s trend of launching New Shepard about once per month. The company has two capsules and two boosters in its active inventory, and each vehicle has flown about once every two months this year. Blue Origin currently has command of the space tourism and suborbital research market as its main competitor in this sector, Virgin Galactic, remains grounded while it builds a next-generation rocket plane. (submitted by EllPeaTea)

NASA still interested in former astronaut’s rocket engine. NASA has awarded the Ad Astra Rocket Company a $4 million, two-year contract for the continued development of the company’s Variable Specific Impulse Magnetoplasma Rocket (VASIMR) concept, Aviation Week & Space Technology reports. Ad Astra, founded by former NASA astronaut Franklin Chang-Diaz, claims the engine could propel human explorers to Mars within 45 days, using a nuclear power source rather than solar power. The new contract will enable federal funding to support development of the engine’s radio frequency, superconducting magnet, and structural exoskeleton subsystems.

Slow going … Houston-based Ad Astra said in a press release that it sees the high-power plasma engine as “nearing flight readiness.” We’ve heard this before. The VASIMR engine has been in development for decades now, beset by a lack of stable funding and the technical hurdles inherent in designing and testing such demanding technology. For example, Ad Astra once planned a critical 100-hour, 100-kilowatt ground test of the VASIMR engine in 2018. The test still hasn’t happened. Engineers discovered a core component of the engine tended to overheat as power levels approached 100 kilowatts, forcing a redesign that set the program back by at least several years. Now, Ad Astra says it is ready to build and test a pair of 150-kilowatt engines, one of which is intended to fly in space at the end of the decade.

Gilmour eyes return to flight next year. Australian rocket and satellite startup Gilmour Space Technologies is looking to return to the launch pad next year after the first attempt at an orbital flight failed over the summer, Aviation Week & Space Technology reports. “We are well capitalized. We are going to be launching again next year,” Adam Gilmour, the company’s CEO, said October 3 at the International Astronautical Congress in Sydney.

What happened? … Gilmour didn’t provide many details about the cause of the July launch failure, other than to say it appeared to be something the company didn’t test for ahead of the flight. The Eris rocket flew for 14 seconds before losing control and crashing a short distance from the launch pad in the Australian state of Queensland. If there’s any silver lining, Gilmour said the failure didn’t damage the launch pad, and the rocket’s use of a novel hybrid propulsion system limited the destructive power of the blast when it struck the ground.

Stoke Space’s impressive funding haul. Stoke Space announced a significant capital raise on Wednesday, a total of $510 million as part of Series D funding. The new financing doubles the total capital raised by Stoke Space, founded in 2020, to $990 million, Ars reports. The infusion of money will provide the company with “the runway to complete development” of the Nova rocket and demonstrate its capability through its first flights, said Andy Lapsa, the company’s co-founder and chief executive, in a news release characterizing the new funding.

A futuristic design … Stoke is working toward a 2026 launch of the medium-lift Nova rocket. The rocket’s innovative design is intended to be fully reusable from the payload fairing on down, with a regeneratively cooled heat shield on the vehicle’s second stage. In fully reusable mode, Nova will have a payload capacity of 3 metric tons to low-Earth orbit, and up to 7 tons in fully expendable mode. Stoke is building a launch pad for the Nova rocket at Cape Canaveral Space Force Station, Florida.

SpaceX took an unusual break from launching. SpaceX launched its first Falcon 9 rocket from Florida in 12 days during the predawn hours of Tuesday morning, Spaceflight Now reports. The launch gap coincided with a run of persistent, daily storms in Central Florida and over the Atlantic Ocean, including hurricanes that prevented deployment of SpaceX’s drone ships to support booster landings. The break ended with the launch of 28 more Starlink broadband satellites. SpaceX launched three Starlink missions in the interim from Vandenberg Space Force Base, California.

Weather still an issue … Weather conditions on Florida’s Space Coast are often volatile, particularly in the evenings during summer and early autumn. SpaceX’s next launch from Florida was supposed to take off Thursday evening, but officials pushed it back to no earlier than Saturday due to a poor weather forecast over the next two days. Weather still gets a vote in determining whether a rocket lifts off or doesn’t, despite SpaceX’s advancements in launch efficiency and the Space Force’s improved weather monitoring capabilities at Cape Canaveral.

ArianeGroup chief departs for train maker. ArianeGroup CEO Martin Sion has been named the new head of French train maker Alstom and will officially take up the role in April 2026, European Spaceflight reports. Sion became ArianeGroup’s chief executive in 2023, replacing a predecessor who left the company after delays in the debut of its main product: the Ariane 6 rocket. Sion’s appointment was announced by Alstom; ArianeGroup has not made any official statement on the matter.

Under pressure … The change in ArianeGroup’s leadership comes as the company ramps up production and increases the launch cadence of the Ariane 6 rocket, which has now flown three times, with a fourth launch due next month. ArianeGroup’s subsidiary, Arianespace, seeks to increase the Ariane 6’s launch cadence to 10 missions per year by 2029. ArianeGroup and its suppliers will need to drastically improve factory throughput to reach this goal.

New Glenn emerges from factory. Blue Origin rolled the first stage of its massive New Glenn rocket out of its hangar on Wednesday morning in Florida, kicking off the final phase of the campaign to launch the heavy-lift vehicle for the second time, Ars reports. In sharing video of the rollout to Launch Complex 36, the company did not provide a launch target for the mission, which seeks to put two small Mars-bound payloads into orbit. The pair of identical spacecraft, known as ESCAPADE, will study the solar wind at Mars. However, sources told Ars that on the current timeline, Blue Origin is targeting a launch window of November 9 to 11. This assumes pre-launch activities, including a static-fire test of the first stage, go well.

Recovery or bust? … Blue Origin has a lot riding on this booster, named “Never Tell Me The Odds,” which it will seek to recover and reuse. Despite the name of the booster, the company is quietly confident that it will successfully land the first stage on a drone ship named Jacklyn. Internally, engineers at Blue Origin believe there is about a 75 percent chance of success. The first booster malfunctioned before landing on the inaugural New Glenn test flight in January. Company officials are betting big on recovering the booster this time, with plans to reuse it early next year to launch Blue Origin’s first lunar lander to the Moon.

SpaceX gets bulk of this year’s military launch orders. Around this time each year, the US Space Force convenes a Mission Assignment Board to dole out contracts to launch the nation’s most critical national security satellites. The military announced this year’s launch orders Friday, and SpaceX was the big winner, Ars reports. Space Systems Command, the unit responsible for awarding military launch contracts, selected SpaceX to launch five of the seven missions up for assignment this year. United Launch Alliance (ULA), a 50-50 joint venture between Boeing and Lockheed Martin, won contracts for the other two. These missions for the Space Force and the National Reconnaissance Office are still at least a couple of years away from flying.

Vulcan getting more expensive … A closer examination of this year’s National Security Space Launch contracts reveals some interesting things. The Space Force is paying SpaceX $714 million for the five launches awarded Friday, for an average of roughly $143 million per mission. ULA will receive $428 million for two missions, or $214 million for each launch. That’s about 50 percent more expensive than SpaceX’s price per mission. This is in line with the prices the Space Force paid SpaceX and ULA for last year’s contracts. However, look back a little further and you’ll find ULA’s prices for military launches have, for some reason, increased significantly over the last few years. In late 2023, the Space Force awarded a $1.3 billion deal to ULA for a batch of 11 launches at an average cost per mission of $119 million. A few months earlier, Space Systems Command assigned six launches to ULA for $672 million, or $112 million per mission.

Starship Flight 11 nears launch. SpaceX rolled the Super Heavy booster for the next test flight of the company’s Starship mega-rocket out to the launch pad in Texas this week. The booster stage, with 33 methane-fueled engines, will power the Starship into the upper atmosphere during the first few minutes of flight. This booster is flight-proven, having previously launched and landed on a test flight in March.

Next steps … With the Super Heavy booster installed on the pad, the next step for SpaceX will be the rollout of the Starship upper stage. That is expected to happen in the coming days. Ground crews will raise Starship atop the Super Heavy booster to fully stack the rocket to its total height of more than 400 feet (120 meters). If everything goes well, SpaceX is targeting liftoff of the 11th full-scale test flight of Starship and Super Heavy as soon as Monday evening. (submitted by EllPeaTea)

Blue Origin takes on a new line of business. Blue Origin won a US Space Force competition to build a new payload processing facility at Cape Canaveral Space Force Station, Florida, Spaceflight Now reports. Under the terms of the $78.2 million contract, Blue Origin will build a new facility capable of handling payloads for up to 16 missions per year. The Space Force expects to use about half of that capacity, with the rest available to NASA or Blue Origin’s commercial customers. This contract award follows a $77.5 million agreement the Space Force signed with Astrotech earlier this year to expand the footprint of its payload processing facility at Vandenberg Space Force Base, California.

Important stuff … Ground infrastructure often doesn’t get the same level of attention as rockets, but the Space Force has identified bottlenecks in payload processing as potential constraints on ramping up launch cadences at the government’s spaceports in Florida and California. Currently, there are only a handful of payload processing facilities in the Cape Canaveral area, and most of them are only open to a single user, such as SpaceX, Amazon, the National Reconnaissance Office, or NASA. So, what exactly is payload processing? The Space Force said Blue Origin’s new facility will include space for “several pre-launch preparatory activities” that include charging batteries, fueling satellites, loading other gaseous and fluid commodities, and encapsulation. To accomplish those tasks, Blue Origin will create “a clean, secure, specialized high-bay facility capable of handling flight hardware, toxic fuels, and explosive materials.”

Next three launches

Oct. 11: Gravity 1 | Unknown Payload | Haiyang Spaceport, China Coastal Waters | 02:15 UTC

Oct. 12: Falcon 9 | Project Kuiper KF-03 | Cape Canaveral Space Force Station, Florida | 00:41 UTC

Oct. 13: Starship/Super Heavy | Flight 11 | Starbase, Texas | 23:15 UTC


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



“Like putting on glasses for the first time”—how AI improves earthquake detection


AI is “comically good” at detecting small earthquakes—here’s why that matters.

Credit: Aurich Lawson | Getty Images

On January 1, 2008, at 1:59 am in Calipatria, California, an earthquake happened. You haven’t heard of this earthquake; even if you had been living in Calipatria, you wouldn’t have felt anything. It was magnitude -0.53, about the same amount of shaking as a truck passing by. Still, this earthquake is notable, not because it was large but because it was small—and yet we know about it.

Over the past seven years, AI tools based on computer imaging have almost completely automated one of the fundamental tasks of seismology: detecting earthquakes. What used to be the task of human analysts—and later, simpler computer programs—can now be done automatically and quickly by machine-learning tools.

These machine-learning tools can detect smaller earthquakes than human analysts, especially in noisy environments like cities. Earthquakes give valuable information about the composition of the Earth and what hazards might occur in the future.

“In the best-case scenario, when you adopt these new techniques, even on the same old data, it’s kind of like putting on glasses for the first time, and you can see the leaves on the trees,” said Kyle Bradley, co-author of the Earthquake Insights newsletter.

I talked with several earthquake scientists, and they all agreed that machine-learning methods have replaced humans for the better in these specific tasks.

“It’s really remarkable,” Judith Hubbard, a Cornell University professor and Bradley’s co-author, told me.

Less certain is what comes next. Earthquake detection is a fundamental part of seismology, but there are many other data processing tasks that have yet to be disrupted. The biggest potential impacts, all the way to earthquake forecasting, haven’t materialized yet.

“It really was a revolution,” said Joe Byrnes, a professor at the University of Texas at Dallas. “But the revolution is ongoing.”

When an earthquake happens in one place, the shaking passes through the ground, similar to how sound waves pass through the air. In both cases, it’s possible to draw inferences about the materials the waves pass through.

Imagine tapping a wall to figure out if it’s hollow. Because a solid wall vibrates differently than a hollow wall, you can figure out the structure by sound.

With earthquakes, this same principle holds. Seismic waves pass through different materials (rock, oil, magma, etc.) differently, and scientists use these vibrations to image the Earth’s interior.

The main tool that scientists traditionally use is a seismometer. These record the movement of the Earth in three directions: up–down, north–south, and east–west. If an earthquake happens, seismometers can measure the shaking in that particular location.

An old-fashioned physical seismometer. Today, seismometers record data digitally. Credit: Yamaguchi先生 on Wikimedia CC BY-SA 3.0

Scientists then process raw seismometer information to identify earthquakes.

Earthquakes produce multiple types of shaking, which travel at different speeds. Two types, primary (P) waves and secondary (S) waves, are particularly important, and scientists like to identify the start of each of these phases.

Before good algorithms, earthquake cataloging had to happen by hand. Byrnes said that “traditionally, something like the lab at the United States Geological Survey would have an army of mostly undergraduate students or interns looking at seismograms.”

However, there are only so many earthquakes you can find and classify manually. Creating algorithms to effectively find and process earthquakes has long been a priority in the field—especially since the arrival of computers in the early 1950s.

“The field of seismology historically has always advanced as computing has advanced,” Bradley told me.

There’s a big challenge with traditional algorithms, though: They can’t easily find smaller quakes, especially in noisy environments.

Composite seismogram of common events. Note how each event has a slightly different shape. Credit: EarthScope Consortium CC BY 4.0

As we see in the seismogram above, many different events can cause seismic signals. If a method is too sensitive, it risks falsely detecting events as earthquakes. The problem is especially bad in cities, where the constant hum of traffic and buildings can drown out small earthquakes.

However, earthquakes have a characteristic “shape.” The magnitude 7.7 earthquake above looks quite different from the helicopter landing, for instance.

So one idea scientists had was to make templates from human-labeled datasets. If a new waveform correlates closely with an existing template, it’s almost certainly an earthquake.
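The core of template matching can be sketched in a few lines of numpy. This is a toy illustration of normalized cross-correlation, not the actual Caltech pipeline; the synthetic waveform, noise level, and injected event location are all made up for the example:

```python
import numpy as np

def normalized_cross_correlation(signal, template):
    """Slide the template over the signal and return the normalized
    correlation coefficient at each offset (values roughly in [-1, 1])."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    scores = np.empty(len(signal) - n + 1)
    for i in range(len(scores)):
        window = signal[i:i + n]
        std = window.std()
        scores[i] = 0.0 if std == 0 else float(np.sum(t * (window - window.mean()) / std))
    return scores

# Toy example: a small synthetic "earthquake" buried in noise.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 6 * np.pi, 100)) * np.exp(-np.linspace(0, 4, 100))
signal = rng.normal(0, 0.2, 1000)
signal[400:500] += template          # hide one event at offset 400
scores = normalized_cross_correlation(signal, template)
detection = int(np.argmax(scores))   # peaks near offset 400
```

A real catalog search repeats this for thousands of templates against years of continuous data, which is why the technique is so computationally expensive.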

Template matching works very well if you have enough human-labeled examples. In 2019, Zach Ross’ lab at Caltech used template matching to find 10 times as many earthquakes in Southern California as had previously been known, including the earthquake at the start of this story. Almost all of the 1.6 million newly found quakes were very small, magnitude 1 and below.

If you don’t have an extensive pre-existing dataset of templates, however, you can’t easily apply template matching. That isn’t a problem in Southern California—which already had a basically complete record of earthquakes down to magnitude 1.7—but it’s a challenge elsewhere.

Also, template matching is computationally expensive. Creating a Southern California quake dataset using template matching took 200 Nvidia P100 GPUs running for days on end.

There had to be a better way.

AI detection models solve all of these problems:

  • They are faster than template matching.

  • Because AI detection models are very small (around 350,000 parameters, compared to billions in LLMs like GPT-4), they can run on consumer CPUs.

  • AI models generalize well to regions not represented in the original dataset.

As an added bonus, AI models can give better information about when the different types of earthquake shaking arrive. Timing the arrivals of the two most important waves—P and S waves—is called phase picking. It allows scientists to draw inferences about the structure of the quake. AI models can do this alongside earthquake detection.

The basic task of earthquake detection (and phase picking) looks like this:

Cropped figure from Earthquake Transformer—an attentive deep-learning model for simultaneous earthquake detection and phase picking. Credit: Nature Communications

The first three rows represent different directions of vibration (east–west, north–south, and up–down respectively). Given these three dimensions of vibration, can we determine if an earthquake occurred, and if so, when it started?

We want to detect the initial P wave, which arrives directly from the site of the earthquake. But this can be tricky because echoes of the P wave may get reflected off other rock layers and arrive later, making the waveform more complicated.

Ideally, then, our model outputs three things at every time step in the sample:

  1. The probability that an earthquake is occurring at that moment.

  2. The probability that the first P wave arrives at that moment.

  3. The probability that the first S wave arrives at that moment.

We see all three outputs in the fourth row: the detection in green, the P wave arrival in blue, and the S wave arrival in red. (There are two earthquakes in this sample.)
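Turning those per-sample probability traces into discrete picks is the simple final step. Here is a minimal, hypothetical sketch in numpy; the trace shapes, threshold, and helper name are illustrative, not Earthquake Transformer's actual post-processing:

```python
import numpy as np

def pick_arrivals(det_prob, p_prob, s_prob, threshold=0.3):
    """Convert per-sample probability traces into discrete picks: the
    sample where each phase probability peaks, kept only if it clears
    the threshold while a detection is also active at that sample."""
    picks = {}
    for name, trace in (("P", p_prob), ("S", s_prob)):
        idx = int(np.argmax(trace))
        if trace[idx] >= threshold and det_prob[idx] >= threshold:
            picks[name] = idx
    return picks

# Toy traces for a 10-second window sampled at 100 Hz (1,000 samples):
t = np.arange(1000)
p_prob = np.exp(-0.5 * ((t - 250) / 10.0) ** 2)     # P bump at sample 250
s_prob = np.exp(-0.5 * ((t - 420) / 15.0) ** 2)     # S bump at sample 420
det_prob = ((t >= 240) & (t <= 700)).astype(float)  # detection window
picks = pick_arrivals(det_prob, p_prob, s_prob)
print(picks)  # {'P': 250, 'S': 420}
```

Real pipelines handle multiple events per window (as in the figure above) by finding every local peak above the threshold rather than a single argmax.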

To train an AI model, scientists take large amounts of labeled data, like what’s above, and do supervised training. I’ll describe one of the most used models: Earthquake Transformer, which was developed around 2020 by a Stanford University team led by S. Mostafa Mousavi, who later became a Harvard professor.

Like many earthquake detection models, Earthquake Transformer adapts ideas from image classification. Readers may be familiar with AlexNet, a famous image-recognition model that kicked off the deep-learning boom in 2012.

AlexNet used convolutions, a neural network architecture that’s based on the idea that pixels that are physically close together are more likely to be related. The first convolutional layer of AlexNet broke an image down into small chunks—11 pixels on a side—and classified each chunk based on the presence of simple features like edges or gradients.

The next layer took the first layer’s classifications as input and checked for higher-level concepts such as textures or simple shapes.

Each convolutional layer analyzed a larger portion of the image and operated at a higher level of abstraction. By the final layers, the network was looking at the entire image and identifying objects like “mushroom” and “container ship.”

Images are two-dimensional, so AlexNet is based on two-dimensional convolutions. By contrast, seismograph data is one-dimensional, so Earthquake Transformer uses one-dimensional convolutions over the time dimension. The first layer analyzes vibration data in 0.1-second chunks, while later layers identify patterns over progressively longer time periods.
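The mechanics of a one-dimensional convolution over three-component seismogram data can be sketched in plain numpy. Random numbers stand in for the learned filter weights here; the shapes are the point, not the values:

```python
import numpy as np

def conv1d(x, weights, stride=1):
    """Minimal 1D convolution: x has shape (in_channels, time) and
    weights has shape (out_channels, in_channels, width). Each output
    sample summarizes one `width`-sample chunk of all input channels."""
    out_ch, in_ch, width = weights.shape
    out_time = (x.shape[1] - width) // stride + 1
    out = np.zeros((out_ch, out_time))
    for t in range(out_time):
        window = x[:, t * stride : t * stride + width]  # local time chunk
        out[:, t] = np.tensordot(weights, window, axes=([1, 2], [0, 1]))
    return out

# Three-component seismogram (E-W, N-S, up-down), 1,000 samples at 100 Hz.
waveform = np.random.default_rng(1).normal(size=(3, 1000))
# Eight filters, each looking at a 0.1-second (10-sample) chunk at a time.
filters = np.random.default_rng(2).normal(size=(8, 3, 10))
features = conv1d(waveform, filters)
print(features.shape)  # (8, 991)
```

Stacking such layers (with nonlinearities and downsampling between them) is what lets later layers see progressively longer stretches of the waveform.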

It’s difficult to say what exact patterns the earthquake model is picking out, but we can analogize this to a hypothetical audio transcription model using one-dimensional convolutions. That model might first identify consonants, then syllables, then words, then sentences over increasing time scales.

Earthquake Transformer converts raw waveform data into a collection of high-level representations that indicate the likelihood of earthquakes and other seismologically significant events. This is followed by a series of deconvolution layers that pinpoint exactly when an earthquake—and its all-important P and S waves—occurred.

The model also uses an attention layer in the middle of the model to mix information between different parts of the time series. The attention mechanism is most famous in large language models, where it helps pass information between words. It plays a similar role in seismographic detection. Earthquake seismograms have a general structure: P waves followed by S waves followed by other types of shaking. So if a segment looks like the start of a P wave, the attention mechanism helps it check that it fits into a broader earthquake pattern.
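The attention step itself is a standard design from the neural network literature. A minimal single-head version in numpy, with arbitrarily chosen dimensions, looks like this:

```python
import numpy as np

def self_attention(features, wq, wk, wv):
    """Single-head self-attention over a (time, dim) feature sequence:
    every time step gathers information from every other step, weighted
    by the similarity between its query and the other steps' keys."""
    q, k, v = features @ wq, features @ wk, features @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over time
    return weights @ v

rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 16))   # 50 time steps of 16-dim features
wq, wk, wv = (rng.normal(size=(16, 16)) for _ in range(3))
out = self_attention(feats, wq, wk, wv)
print(out.shape)  # (50, 16)
```

The output keeps the same shape as the input, so the layer slots between convolutional stages without changing the rest of the architecture.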

All of the Earthquake Transformer’s components are standard designs from the neural network literature. Other successful detection models, like PhaseNet, are even simpler. PhaseNet uses only one-dimensional convolutions to pick the arrival times of earthquake waves. There are no attention layers.

Generally, there hasn’t been “much need to invent new architectures for seismology,” according to Byrnes. The techniques derived from image processing have been sufficient.

What made these generic architectures work so well then? Data. Lots of it.

Ars has previously reported on how the introduction of ImageNet, an image recognition benchmark, helped spark the deep learning boom. Large, publicly available earthquake datasets have played a similar role in seismology.

Earthquake Transformer was trained using the Stanford Earthquake Dataset (STEAD), which contains 1.2 million human-labeled segments of seismogram data from around the world. (The STEAD paper explicitly mentions ImageNet as an inspiration.) Other models, like PhaseNet, were also trained on hundreds of thousands or millions of labeled segments.

All recorded earthquakes in the Stanford Earthquake Dataset. Credit: IEEE (CC BY 4.0)

The combination of the data and the architecture just works. The current models are “comically good” at identifying and classifying earthquakes, according to Byrnes. Typically, machine-learning methods find 10 or more times the quakes that were previously identified in an area. You can see this directly in an Italian earthquake catalog:

From Machine learning and earthquake forecasting—next steps by Beroza et al. Credit: Nature Communications (CC-BY 4.0)

AI tools won’t necessarily detect more earthquakes than template matching. But AI-based techniques are much less compute- and labor-intensive, making them more accessible to the average research project and easier to apply in regions around the world.

All in all, these machine-learning models are so good that they’ve almost completely supplanted traditional methods for detecting and phase-picking earthquakes, especially for smaller magnitudes.

The holy grail of earthquake science is earthquake prediction. For instance, scientists know that a large quake will happen near Seattle but have little ability to know whether it will happen tomorrow or in a hundred years. It would be helpful if we could predict earthquakes precisely enough to allow people in affected areas to evacuate.

You might think AI tools would help predict earthquakes, but that doesn’t seem to have happened yet.

The applications are more technical and less flashy, said Cornell’s Judith Hubbard.

Better AI models have given seismologists much more comprehensive earthquake catalogs, which have unlocked “a lot of different techniques,” Bradley said.

One of the coolest applications is in understanding and imaging volcanoes. Volcanic activity produces a large number of small earthquakes, whose locations help scientists understand the structure of the magma system. In a 2022 paper, John Wilding and co-authors used a large AI-generated earthquake catalog to create this incredible image of the structure of the Hawaiian volcanic system.

Each dot represents an individual earthquake. Credit: Wilding et al., The magmatic web beneath Hawai‘i.

They provided direct evidence of a previously hypothesized magma connection between the deep Pāhala sill complex and Mauna Loa’s shallow volcanic structure. You can see this in the image with the arrow labeled as Pāhala-Mauna Loa seismicity band. The authors were also able to clarify the structure of the Pāhala sill complex into discrete sheets of magma. This level of detail could potentially facilitate better real-time monitoring of earthquakes and more accurate eruption forecasting.

Another promising area is lowering the cost of dealing with huge datasets. Distributed Acoustic Sensing (DAS) is a powerful technique that uses fiber-optic cables to measure seismic activity across the entire length of the cable. A single DAS array can produce “hundreds of gigabytes of data” a day, according to Jiaxuan Li, a professor at the University of Houston. That much data can produce extremely high-resolution datasets—enough to pick out individual footsteps.

AI tools make it possible to very accurately time earthquakes in DAS data. Before the introduction of AI techniques for phase picking in DAS data, Li and some of his collaborators attempted to use traditional techniques. While these “work roughly,” they weren’t accurate enough for their downstream analysis. Without AI, much of his work would have been “much harder,” he told me.

Li is also optimistic that AI tools will be able to help him isolate “new types of signals” in the rich DAS data in the future.

Not all AI techniques have paid off

As in many other scientific fields, seismologists face some pressure to adopt AI methods, whether or not they are relevant to their research.

“The schools want you to put the word AI in front of everything,” Byrnes said. “It’s a little out of control.”

This can lead to papers that are technically sound but practically useless. Hubbard and Bradley told me that they’ve seen a lot of papers based on AI techniques that “reveal a fundamental misunderstanding of how earthquakes work.”

They pointed out that graduate students can feel pressure to specialize in AI methods at the cost of learning less about the fundamentals of the scientific field. They fear that if this type of AI-driven research becomes entrenched, older methods will get “out-competed by a kind of meaninglessness.”

While these are real issues, and ones Understanding AI has reported on before, I don’t think they detract from the success of AI earthquake detection. In the last five years, an AI-based workflow has almost completely replaced one of the fundamental tasks in seismology for the better.

That’s pretty cool.

Kai Williams is a reporter for Understanding AI, a Substack newsletter founded by Ars Technica alum Timothy B. Lee. His work is supported by a Tarbell Fellowship. Subscribe to Understanding AI to get more from Tim and Kai.

“Like putting on glasses for the first time”—how AI improves earthquake detection Read More »

we’re-about-to-find-many-more-interstellar-interlopers—here’s-how-to-visit-one

We’re about to find many more interstellar interlopers—here’s how to visit one


“You don’t have to claim that they’re aliens to make these exciting.”

The Hubble Space Telescope captured this image of the interstellar comet 3I/ATLAS on July 21, when the comet was 277 million miles from Earth. Hubble shows that the comet has a teardrop-shaped cocoon of dust coming off its solid, icy nucleus. Credit: NASA, ESA, David Jewitt (UCLA); Image Processing: Joseph DePasquale (STScI)


A few days ago, an inscrutable interstellar interloper made its closest approach to Mars, where a fleet of international spacecraft seek to unravel the red planet’s ancient mysteries.

Several of the probes encircling Mars took a break from their usual activities and turned their cameras toward space to catch a glimpse of an object named 3I/ATLAS, a rogue comet that arrived in our Solar System from interstellar space and is now barreling toward perihelion—its closest approach to the Sun—at the end of this month.

This is the third interstellar object astronomers have detected within our Solar System, following 1I/ʻOumuamua and 2I/Borisov, discovered in 2017 and 2019, respectively. Scientists think interstellar objects routinely transit among the planets, but telescopes have only recently become capable of spotting one. For example, the telescope that discovered ʻOumuamua only came online in 2010.

Detectable but still unreachable

Astronomers first reported observations of 3I/ATLAS on July 1, just four months before the object's deepest penetration into the Solar System. Unfortunately for astronomers, the particulars of its trajectory bring it to perihelion while the Earth is on the opposite side of the Sun. The closest 3I/ATLAS will come to Earth is about 170 million miles (270 million kilometers), in December, eliminating any chance for high-resolution imaging. The viewing geometry also means the Sun's glare will block all direct views of the comet from Earth until next month.

The James Webb Space Telescope observed interstellar comet 3I/ATLAS on August 6 with its Near-Infrared Spectrograph instrument. Credit: NASA/James Webb Space Telescope

Because of that, the closest any active spacecraft will get to 3I/ATLAS happened Friday, when it passed less than 20 million miles (30 million kilometers) from Mars. NASA’s Perseverance rover and Mars Reconnaissance Orbiter were expected to make observations of 3I/ATLAS, along with Europe’s Mars Express and ExoMars Trace Gas Orbiter missions.

The best views of the object so far have been captured by the James Webb Space Telescope and the Hubble Space Telescope, positioned much closer to Earth. Those observations helped astronomers narrow down the object's size, but the estimates remain imprecise. Based on Hubble's images, the icy core of 3I/ATLAS is somewhere between the size of the Empire State Building and something a little larger than Central Park.

That may be the most we’ll ever know about the dimensions of 3I/ATLAS. The spacecraft at Mars lack the exquisite imaging sensitivity of Webb and Hubble, so don’t expect spectacular views from Friday’s observations. But scientists hope to get a better handle on the cloud of gas and dust surrounding the object, giving it the appearance of a comet. Spectroscopic observations have shown the coma around 3I/ATLAS contains water vapor and an unusually strong signature of carbon dioxide extending out nearly a half-million miles.

On Tuesday, the European Space Agency released the first grainy images of 3I/ATLAS captured at Mars. The best views will come from HiRISE, a high-resolution camera on NASA's Mars Reconnaissance Orbiter. The images from NASA won't be released until after the end of the ongoing federal government shutdown, according to a member of the HiRISE team.

Europe’s ExoMars Trace Gas Orbiter turned its eyes toward interstellar comet 3I/ATLAS as it passed close to Mars on Friday, October 3. The comet’s coma is visible as a fuzzy blob surrounding its nucleus, which was not resolved by the spacecraft’s camera. Credit: ESA/TGO/CaSSIS

Studies of 3I/ATLAS suggest it was probably kicked out of another star system, perhaps by an encounter with a giant planet. Comets in our Solar System sometimes get ejected into the Milky Way galaxy when they come too close to Jupiter. 3I/ATLAS likely roamed the galaxy for billions of years before arriving in the Sun's galactic neighborhood.

The rogue comet is now gaining speed as gravity pulls it toward perihelion, when it will max out at a relative velocity of 152,000 mph (68 kilometers per second), much too fast to be bound into a closed orbit around the Sun. Instead, the comet will catapult back into the galaxy, never to be seen again.

We need to talk about aliens

Anyone who studies planetary formation would relish the opportunity to get a close-up look at an interstellar object. Sending a mission to one would undoubtedly yield a scientific payoff. There’s a good chance that many of these interlopers have been around longer than our own 4.5 billion-year-old Solar System.

One study from the University of Oxford suggests that 3I/ATLAS came from the “thick disk” of the Milky Way, which is home to a dense population of ancient stars. This origin story would mean the comet is probably more than 7 billion years old, holding clues about cosmic history that are simply inaccessible among the planets, comets, and asteroids that formed with the birth of the Sun.

This is enough reason to mount a mission to explore one of these objects, scientists said. It doesn’t need justification from unfounded theories that 3I/ATLAS might be an artifact of alien technology, as proposed by Harvard University astrophysicist Avi Loeb. The scientific consensus is that the object is of natural origin.

Loeb shared a similar theory about the first interstellar object found wandering through our Solar System. His statements have sparked questions in popular media about why the world’s space agencies don’t send a probe to actually visit one. Loeb himself proposed redirecting NASA’s Juno spacecraft in orbit around Jupiter on a mission to fly by 3I/ATLAS, and his writings prompted at least one member of Congress to write a letter to NASA to “rejuvenate” the Juno mission by breaking out of Jupiter’s orbit and taking aim at 3I/ATLAS for a close-up inspection.

The problem is that Juno simply doesn’t have enough fuel to reach the comet, and its main engine is broken. In fact, the total boost required to send Juno from Jupiter to 3I/ATLAS (roughly 5,800 mph or 2.6 kilometers per second) would surpass the fuel capacity of most interplanetary probes.

Ars asked Scott Bolton, lead scientist on the Juno mission, and he confirmed that the spacecraft lacks the oomph required for the kind of maneuvers proposed by Loeb. “We had no role in that paper,” Bolton told Ars. “He assumed propellant that we don’t really have.”

Avi Loeb, a Harvard University astrophysicist. Credit: Anibal Martel/Anadolu Agency via Getty Images

So Loeb’s exercise was moot, but his talk of aliens has garnered public attention. Loeb appeared on the conservative network Newsmax last week to discuss his theory of 3I/ATLAS alongside Rep. Tim Burchett (R-Tenn.). Predictably, conspiracy theories abounded. Still, as of Tuesday, the segment had 1.2 million views on YouTube. Maybe it’s a good thing that people who approve government budgets, especially those without a preexisting interest in NASA, are eager to learn more about the Universe. We will leave it to the reader to draw their own conclusions on that matter.

Loeb’s calculations also help illustrate the difficulty of pulling off a mission to an interstellar object. So far, we’ve only known about an incoming interstellar intruder a few months before it comes closest to Earth. That’s not to mention the enormous speeds at which these objects move through the Solar System. It’s just not feasible to build a spacecraft and launch it on such short notice.

Now, some scientists are working on ways to overcome these limitations.

So you’re saying there’s a chance?

One of these people is Colin Snodgrass, an astronomer and planetary scientist at the University of Edinburgh. A few years ago, he helped pitch the European Space Agency a mission concept that would very likely have been laughed out of the room a generation ago. Snodgrass and his team wanted a commitment from ESA of up to $175 million (150 million euros) to launch a mission with no idea of where it would go.

ESA officials called Snodgrass in 2019 to say the agency would fund his mission, named Comet Interceptor, for launch in the late 2020s. The goal of the mission is to perform the first detailed observations of a long-period comet. So far, spacecraft have only visited short-period comets that routinely dip into the inner part of the Solar System.

A long-period comet is an icy visitor from the farthest reaches of the Solar System that has spent little time getting blasted by the Sun’s heat and radiation, freezing its physical and chemical properties much as they were billions of years ago.

Long-period comets are typically discovered a year or two before coming near the Sun, still not enough time to develop a mission from scratch. With Comet Interceptor, ESA will launch a probe to loiter in space a million miles from Earth, wait for the right comet to come along, then fire its engines to pursue it.

Odds are good that the right comet will come from within the Solar System. “That is the point of the mission,” Snodgrass told Ars.

ESA’s Comet Interceptor will be the first mission to visit a comet coming directly from the outer reaches of the Sun’s realm, carrying material untouched since the dawn of the Solar System. Credit: European Space Agency

But if astronomers detect an interstellar object coming toward us on the right trajectory, there’s a chance Comet Interceptor could reach it.

“I think that the entire science team would agree, if we get really lucky and there’s an interstellar object that we could reach, then to hell with the normal plan, let’s go and do this,” Snodgrass said. “It’s an opportunity you couldn’t just leave sitting there.”

But, he added, it’s “very unlikely” that an interstellar object will be in the right place at the right time. “Although everyone’s always very excited about the possibility, and we’re excited about the possibility, we kind of try and keep the expectations to a realistic level.”

For example, if Comet Interceptor were in space today, there’s no way it could reach 3I/ATLAS. “It’s an unfortunate one,” Snodgrass said. “Its closest point to the Sun, it reaches that on the other side of the Sun from where the Earth is. Just bad timing.” If an interceptor were parked somewhere else in the Solar System, it might be able to get itself in position for an encounter with 3I/ATLAS. “There’s only so much fuel aboard,” Snodgrass said. “There’s only so fast we can go.”

It’s even harder to send a spacecraft to encounter an interstellar object than it is to visit one of the Solar System’s homegrown long-period comets. The calculation of whether Comet Interceptor could reach one of these galactic visitors boils down to where it’s heading and when astronomers discover it.

Snodgrass is part of a team using big telescopes to observe 3I/ATLAS from a distance. “As it’s getting closer to the Sun, it is getting brighter,” he said in an interview.

“You don’t have to claim that they’re aliens to make these exciting,” Snodgrass said. “They’re interesting because they are a bit of another solar system that you can actually feasibly get an up-close view of, even the sort of telescopic views we’re getting now.”

Colin Snodgrass, a professor at the University of Edinburgh, leads the Comet Interceptor science team. Credit: University of Edinburgh

Comets and asteroids are the linchpins for understanding the formation of the Solar System. These modest worlds are the leftover building blocks from the debris that coalesced into the planets. Today, direct observations have only allowed scientists to study the history of one planetary system. An interstellar comet would grow the sample size to two.

Still, Snodgrass said his team prefers to keep their energy focused on reaching a comet originating from the frontier of our own Solar System. “We’re not going to let a very lovely Solar System comet go by, waiting to see ‘what if there’s an interstellar thing?'” he said.

Snodgrass sees Comet Interceptor as a proof of concept for scientists to propose a future mission specially designed to travel to an interstellar object. “You need to figure out how do you build the souped-up version that could really get to an interstellar object? I think that’s five or 10 years away, but [it’s] entirely realistic.”

An American answer

Scientists in the United States are working on just such a proposal. A team from the Southwest Research Institute completed a concept study showing how a mission could fly by one of these interstellar visitors. What’s more, the US scientists say their proposed mission could have actually reached 3I/ATLAS had it already been in space.

The American concept is similar to Europe’s Comet Interceptor in that it will park a spacecraft somewhere in deep space and wait for the right target to come along. The study was led by Alan Stern, the chief scientist on NASA’s New Horizons mission that flew by Pluto a decade ago. “These new kinds of objects offer humankind the first feasible opportunity to closely explore bodies formed in other star systems,” he said.

An animation of the trajectory of 3I/ATLAS through the inner Solar System. Credit: NASA/JPL

It’s impossible with current technology to send a spacecraft to match orbits and rendezvous with a high-speed interstellar comet. “We don’t have to catch it,” Stern recently told Ars. “We just have to cross its orbit. So it does carry a fair amount of fuel in order to get out of Earth’s orbit and onto the comet’s path to cross that path.”

Stern said his team developed a cost estimate for such a mission, and while he didn’t disclose the exact number, he said it would fall under NASA’s cost cap for a Discovery-class mission. The Discovery program is a line of planetary science missions that NASA selects through periodic competitions within the science community. The cost cap for NASA’s next Discovery competition is expected to be $800 million, not including the launch vehicle.

A mission to encounter an interstellar comet requires no new technologies, Stern said. Hopes for such a mission are bolstered by the activation of the US-funded Vera Rubin Observatory, a state-of-the-art facility high in the mountains of Chile set to begin deep surveys of the entire southern sky later this year. Stern predicts Rubin will discover “one or two” interstellar objects per year. The new observatory should be able to detect the faint light from incoming interstellar bodies sooner, providing missions with more advance warning.

“If we put a spacecraft like this in space for a few years, while it’s waiting, there should be five or 10 to choose from,” he said.

Alan Stern speaks onstage during Day 1 of TechCrunch Disrupt SF 2018 in San Francisco. Credit: Photo by Kimberly White/Getty Images for TechCrunch

Winning NASA funding for a mission like Stern’s concept will not be easy. It must compete with dozens of other proposals, and NASA’s next Discovery competition is probably at least two or three years away. The timing of the competition is more uncertain than usual due to swirling questions about NASA’s budget after the Trump administration announced it wants to cut the agency’s science funding in half.

Comet Interceptor, on the other hand, is already funded in Europe. ESA has become a pioneer in comet exploration. The Giotto probe flew by Halley’s Comet in 1986, becoming the first spacecraft to make close-up observations of a comet. ESA’s Rosetta mission became the first spacecraft to orbit a comet in 2014, and later that year, it deployed a German-built lander to return the first data from the surface of a comet. Both of those missions explored short-period comets.

“Each time that ESA has done a comet mission, it’s done something very ambitious and very new,” Snodgrass said. “The Giotto mission was the first time ESA really tried to do anything interplanetary… And then, Rosetta, putting this thing in orbit and landing on a comet was a crazy difficult thing to attempt to do.”

“They really do push the envelope a bit, which is good because ESA can be quite risk averse, I think it’s fair to say, with what they do with missions,” he said. “But the comet missions, they are things where they’ve really gone for that next step, and Comet Interceptor is the same. The whole idea of trying to design a space mission before you know where you’re going is a slightly crazy way of doing things. But it’s the only way to do this mission. And it’s great that we’re trying it.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

We’re about to find many more interstellar interlopers—here’s how to visit one Read More »

how-easter-island’s-giant-statues-“walked”-to-their-final-platforms

How Easter Island’s giant statues “walked” to their final platforms


Workers with ropes could make the moai “walk” in a zig-zag motion along roads tailor-made for the purpose.

Easter Island is famous for its giant monumental statues, called moai, built some 800 years ago and typically mounted on platforms called ahu. Scholars have puzzled over the moai on Easter Island for decades, pondering their cultural significance, as well as how a Stone Age culture managed to carve and transport statues weighing as much as 92 tons. One hypothesis, championed by archaeologist Carl Lipo of Binghamton University, among others, is that the statues were transported in a vertical position, with workers using ropes to essentially “walk” the moai onto their platforms.

The oral traditions of the people of Rapa Nui certainly include references to the moai “walking” from the quarry to their platforms, such as a song that tells of an early ancestor who made the statues walk. While there have been rudimentary field tests showing it might have been possible, the hypothesis has also generated a fair amount of criticism. So Lipo has co-authored a new paper published in the Journal of Archaeological Science offering fresh experimental evidence of “walking” moai, based on 3D modeling of the physics and new field tests to recreate that motion.

The first Europeans arrived in the 17th century and found only a few thousand inhabitants on the tiny island (just 14 by 7 miles across), thousands of miles from any other land. To explain the presence of so many moai, scholars long assumed the island was once home to tens of thousands of people. But Lipo thought the feat could be accomplished with fewer workers. In 2012, Lipo and his colleague Terry Hunt of the University of Arizona showed that just 18 people with three strong ropes could transport a 10-foot, 5-ton moai a few hundred yards by employing a rocking motion.

In 2018, Lipo followed up with an intriguing hypothesis for how the islanders placed red hats on top of some moai; those can weigh up to 13 tons. He suggested the inhabitants used ropes to roll the hats up a ramp. Lipo and his team later concluded (based on quantitative spatial modeling) that the islanders likely chose the statues’ locations based on the availability of fresh water sources, per a 2019 paper in PLOS One.

The 2012 experiment demonstrated proof of principle, so why is Lipo revisiting it now? “I always felt that the [original] experiment was disconnected to some degree of theory—that we didn’t have particular expectations about numbers of people, rate of transport, road slope that could be walked, and so on,” Lipo told Ars. There were also time constraints because the attempt was being filmed for a NOVA documentary.

“That experiment was basically a test to see if we could make it happen or not,” he explained. “Fortunately, we did, and our joy in doing so is pretty well represented by our hoots and hollers when it started to walk with such limited efforts. Some of the limitation of the work was driven by the nature of TV. [The film crew] just wanted us—in just a day and a half—to give it a shot. It was 4:30 on the last day when it finally worked, so we really didn’t get a lot of time to explore variability. We also didn’t have any particular predictions to test.”

Example of a road moai that fell and was abandoned after an attempt to re-erect it by excavating under its base, leaving it partially buried at an angle. Credit: Carl Lipo

This time around, “We wanted to explore a bit of the physics: to show that what we did was pretty easily predicted by the physical properties of the moai—its shape, size, height, number of people on ropes, etc.—and that our success in terms of team size and rate of walking was consistent with predictions,” said Lipo. “This enables us to address one of the central critiques that always comes up: ‘Well, you did this with a 5-ton version that was 10 feet tall, but it would never work with a 30-ft-tall version that weighs 30 tons or more.'”

All about that base

You can have ahu (platforms) without moai (statues) and moai without ahu, usually along the roads leading to ahu; they were likely being transported and never got to their destination. Lipo and Hunt have amassed a database of 962 moai across the island, compiled through field surveys and photogrammetric documentation. They were particularly interested in 62 statues located along ancient transport roads that seemed to have been abandoned where they fell.

Their analysis revealed that these road moai had significantly wider bases relative to shoulder width, compared to statues mounted on platforms. This creates a stable foundation that lowers the center of mass so that the statue is more conducive to the side-to-side motion of walking transport without toppling over. Platform statues, by contrast, have shoulders wider than the base for a more top-heavy configuration.

The road moai also have a consistent, pronounced forward lean of between 6 and 15 degrees from vertical, which moves the center of mass close to or just beyond the base’s front edge. Lipo and Hunt think this was careful engineering, not coincidence. The lean is not conducive to stable vertical display, but it is a boon during walking transport: it causes the statue to fall forward when tilted laterally, with the rounded front base edge serving as a crucial pivot point, so every lateral rocking motion results in a forward “step.”
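The geometry is easy to check: a forward lean of angle θ shifts the center of mass horizontally by roughly h·tan(θ), where h is the center-of-mass height. The dimensions below are hypothetical, chosen for illustration rather than taken from the paper, but they show how the reported 6-15 degree range can span the transition from "close to" to "just beyond" the front edge of the base.

```python
import math

# Hypothetical dimensions, for illustration only (not from the paper):
h_com = 4.0      # center-of-mass height above the base (m)
half_base = 0.8  # distance from the center axis to the front base edge (m)

for lean_deg in (6, 10, 15):  # the forward lean range reported for road moai
    offset = h_com * math.tan(math.radians(lean_deg))
    where = "beyond" if offset > half_base else "within"
    print(f"{lean_deg:>2} deg lean -> center of mass "
          f"{offset:.2f} m forward ({where} front edge)")
```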

Per the authors, there is strong archaeological evidence that carvers modified the statues once they arrived at their platform destinations, removing material from the front of the base to eliminate the lean. This shifted the center of mass back over the base for a stable upright position. The road moai even lack the carved eye sockets designed to hold white coral eyes with obsidian or red scoria pupils—a final step completed once the statues had been mounted on their platforms.

Based on 3D modeling, Lipo and his team created a precisely scaled replica of one of the road moai, weighing 4.35 metric tons with the same proportions and mass distribution of the original statue. “Of course, we’d love to build a 30-foot-tall version, but the physical impossibility of doing so makes it a challenging task, nor is it entirely necessary,” said Lipo. “Through physics, we can now predict how many people it would take and how it would be done. That is key.”

Lipo’s team created 3D models of moai to determine the unique characteristics that made them able to be “walked” across Rapa Nui. Credit: Carl Lipo

The new field trials required 18 people, four on each of two lateral ropes and 10 on a rear rope, to achieve the side-to-side walking motion, and the team was efficient enough to move the statue forward 100 meters in just 40 minutes. That’s because the method operates on basic pendulum dynamics, per the authors, which minimizes friction between the base and the ground. The technique also exploits the gradual build-up of amplitude, which “suggests a sophisticated understanding of resonance principles,” Lipo and Hunt wrote.
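The resonance point can be illustrated with a toy driven-oscillator model: small pushes applied in phase with the natural rocking frequency steadily grow the sway, which is why a small rope team can keep a multi-ton statue stepping. Every constant below is invented for illustration and does not describe a real moai's dynamics.

```python
import math

# Toy driven oscillator: in-phase pushes at the natural rocking frequency
# gradually build up the sway, the way coordinated rope pulls would.
omega = 2 * math.pi * 0.5      # assumed natural rocking frequency (0.5 Hz)
dt, damping, push = 0.01, 0.05, 0.2
x, v = 0.0, 0.0                # sway angle and angular velocity
sway = []
for step in range(3000):       # 30 seconds of simulated rocking
    t = step * dt
    force = push * math.cos(omega * t)           # pull in phase with the sway
    a = -omega**2 * x - damping * v + force      # restoring + damping + drive
    v += a * dt                                  # semi-implicit Euler update
    x += v * dt
    sway.append(abs(x))

print(f"peak sway, first 10 s: {max(sway[:1000]):.3f}")
print(f"peak sway, full 30 s:  {max(sway):.3f}")  # amplitude keeps growing
```

Because the drive matches the natural frequency, each small push adds energy on every cycle; off-resonance pushes of the same strength would produce a far smaller sway.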

So the actual statues could have been moved several kilometers over the course of weeks by modest crews of 20 to 50 people, roughly the size of an extended family or “small lineage group” on Easter Island. Once a crew gets the statue rocking side to side—which can require between 15 and 60 people, depending on the moai’s size and weight—the resulting oscillation needs only minimal energy input from a smaller team of rope handlers, who mostly provide guidance.

Lipo was not the first to test the walking hypothesis. Earlier work includes that of Czech experimental archaeologist Pavel Pavel, who conducted similar practical experiments on Easter Island in the 1980s after being inspired by Thor Heyerdahl’s Kon Tiki. (Heyerdahl even participated in the experiments.) Pavel’s team was able to demonstrate a kind of “shuffling” motion, and he concluded that just 16 men and one leader were sufficient to transport the statues.

Per Lipo and Hunt, Pavel’s demonstration didn’t result in broad acceptance of the walking hypothesis because it still required a huge amount of effort to tilt the statue, producing more of a twisting motion rather than efficient forward movement. This would only have moved a large statue 100 meters a day under ideal conditions. The base was also likely to be damaged from friction with the ground. Lipo and Hunt maintain this is because Pavel (and others who later tried to reproduce his efforts) used the wrong form of moai for those earlier field tests: those erected on the platforms, already modified for vertical stability and permanent display, and not the road moai with shapes more conducive to vertical transport.

“Pavel deserves recognition for taking oral traditions seriously and challenging the dominant assumption of horizontal transport, a move that invited ridicule from established scholars,” Lipo and Hunt wrote. “His experiments suggested that vertical transport was feasible and consistent with cultural memory. Our contribution builds on this by showing that ancestral engineers intentionally designed statues for walking. Those statues were later modified to stand erect on ceremonial platforms, a transformation that effectively erased the morphological features essential for movement.”

The evidence of the roadways

Lipo and Hunt also analyzed the roadways, noting that these ancient roadbeds had concave cross sections that would have been problematic for moving the statues horizontally using wooden rollers or frames perpendicular to those roads. But that concave shape would help constrain rocking movement during vertical transport. And the moai roads were remarkably level with slopes of, on average, 2–3 percent. For the occasional steeper slopes, such as walking a moai up a ramp to the top of an ahu, Lipo and Hunt’s field experiments showed that these could be navigated successfully through controlled stepping.

Furthermore, the distribution pattern of the roadways is consistent with the road moai being left due to mechanical failure. “Arguments that the moai were placed ceremonially in preparation for quarrying have become more common,” said Lipo. “The algorithm there is to claim that positions are ritual, without presenting anything that is falsifiable. There is no reason why the places the statues fell due to mechanical reasons couldn’t later become ‘ritual,’ in the same way that everything on the island could be claimed to be ritual—a circular argument. But to argue that they were placed there purposefully for ritual purposes demands framing the explanation in a way that is falsifiable.”

Schematic representation of the moai transport method using coordinated rope pulling to achieve a “walking” motion. Credit: Carl Lipo and Terry Hunt, 2025

“The only line of evidence that is presented in this way is the presence of ‘platforms’ that were found beneath the base of one moai, which is indeed intriguing,” Lipo continued. “However, those platforms can be explained in other ways, given that the moai certainly weren’t moved from the quarry to the ahu in one single event. They were paused along the way, as is clear from the fact that the roads appear to have been constructed in segments with different features. Their construction appears to be part of the overall transport process.”

Lipo’s work has received a fair share of criticism from other scholars over the years, and his and Hunt’s paper includes a substantial section rebutting the most common of those critiques. “Archaeologists tend to reject (in practice) the idea that the discipline can construct cumulative knowledge,” said Lipo. “In the case of moai transport, we’ve strived to assemble as much empirical evidence as possible and have forwarded an explanation that best accounts for what we can observe. Challenges to these ideas, however, do not come from additional studies with new data but rather just new assertions.”

“This leads the public to believe that we (as a discipline) can never really figure anything out and are always going to be a speculative enterprise, spinning yarns and arguing with each other,” Lipo continued. “With the erosion of trust in science, this is fairly catastrophic to archaeology as a whole but also the whole scientific enterprise. Summarizing the results in the way we do here is an attempt to point out that we can build falsifiable accounts and can make contributions to cumulative knowledge that have empirical consequences—even with something as remarkable as the transport of moai.”

Experimental archaeology is a relatively new field that some believe could be the future of archaeology. “I think experimental archaeology has potential when it’s tied to physics and chemistry,” said Lipo. “It’s not just recreating something and then arguing it was done in the same way in the past. Physics and chemistry are our time machines, allowing us to explain why things are the way they are in the present in terms of the events that occurred in the past. The more we can link the theory needed to explain the present, the better we can explain the past.”

DOI: Journal of Archaeological Science, 2025. 10.1016/j.jas.2025.106383  (About DOIs).

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

one-nasa-science-mission-saved-from-trump’s-cuts,-but-others-still-in-limbo

One NASA science mission saved from Trump’s cuts, but others still in limbo


“Damage is being done already. Even if funding is reinstated, we have already lost people.”

Artist’s illustration of the OSIRIS-APEX spacecraft at asteroid Apophis. Credit: NASA/Goddard Space Flight Center

NASA has thrown a lifeline to scientists working on a mission to visit an asteroid that will make an unusually close flyby of the Earth in 2029, reversing the Trump administration’s previous plan to shut it down.

This mission, named OSIRIS-APEX, was one of 19 operating NASA science missions the White House proposed canceling in a budget blueprint released earlier this year.

“We were called for cancellation as part of the president’s budget request, and we were reinstated and given a plan to move ahead in FY26 (Fiscal Year 2026) just two weeks ago,” said Dani DellaGiustina, principal investigator for OSIRIS-APEX at the University of Arizona. “Our spacecraft appears happy and healthy.”

OSIRIS-APEX repurposes the spacecraft from NASA’s OSIRIS-REx asteroid sample return mission, which deposited its extraterrestrial treasure back on Earth in 2023. The spacecraft was in good shape and still had plenty of fuel, so NASA decided to send it to explore another asteroid, named Apophis, due to pass about 20,000 miles (32,000 kilometers) from the Earth on April 13, 2029.

The flyby of Apophis offers scientists a golden opportunity to see a potential killer asteroid up close. Apophis has a lumpy shape with an average diameter of about 1,100 feet (340 meters), large enough to cause regional devastation if it impacted the Earth. The asteroid has no chance of striking us in 2029 or any other time for the next century, but it routinely crosses the Earth’s path as it circles the Sun, so the long-term risk is non-zero.

It pays to be specific

Everything was going well with OSIRIS-APEX until May, when White House officials signaled their intention to terminate the mission. The Trump administration’s proposed cancellation of 19 of NASA’s operating missions was part of a nearly 50 percent cut to the agency’s science budget in the White House budget request for fiscal year 2026, which began October 1.

Lawmakers in the House and Senate have moved to reject nearly all of the science cuts, with the Senate bill maintaining funding for NASA’s science division at $7.3 billion, the same as fiscal year 2025, while the House bill reduces it to $6 billion, still significantly more than the $3.9 billion for science in the White House budget proposal.

The Planetary Society released this chart showing the 19 operating missions tagged for termination under the White House’s budget proposal.

For a time this summer, Trump’s political appointees at NASA told managers to make plans for the next year assuming Trump’s cuts would be enacted. Finally, last month, those officials relented and instructed agency employees to abide by the House appropriations bill.

The House and Senate still have not agreed on any final budget numbers or sent an appropriations bill to the White House for President Trump’s signature. That’s why the federal government has been partially shut down for the last week. Despite the shutdown, ground teams are still operating NASA’s science missions because suspending them could result in irreparable damage.

Using the House’s proposed budget should salvage much of NASA’s portfolio, but it is still $1.3 billion short of the money the agency’s science program got last year. That means some things will inevitably get cut. Many of the other operating missions the Trump administration tagged for termination remain on the chopping block.

OSIRIS-APEX escaped this fate for a simple reason. Lawmakers earmarked $20 million for the mission in the House budget bill. Most other missions didn’t receive the same special treatment. It seems OSIRIS-APEX had a friend in Congress.

Budget-writers in the House of Representatives specified NASA should commit $20 million for the OSIRIS-APEX mission in fiscal year 2026. Credit: US House of Representatives

The only other operating mission the Trump administration wanted to cancel that got a similar earmark in the House budget bill was the Magnetospheric Multiscale Mission (MMS), a fleet of four probes in space since 2015 studying Earth’s magnetosphere. Lawmakers want to provide $20 million for MMS operations in 2026. Ars was unable to confirm the status of the MMS mission Wednesday.

The other 17 missions set to fall under Trump’s budget ax remain in a state of limbo. There are troubling signs the administration might go ahead and kill the missions. Earlier this year, NASA directed managers from all 19 of the missions at risk of cancellation to develop preliminary plans to wind down their missions.

A scientist on one of the projects told Ars that NASA recently asked for a more detailed “termination plan” to “passivate” their spacecraft by the end of this year. This goes a step beyond the closeout plans NASA requested in the summer. Passivation is a standard last rite for a spacecraft, when engineers command it to vent leftover fuel and drain its batteries, rendering it fully inert. This would make the mission unrecoverable if someone tried to contact it again.

This scientist said none of the missions up for termination will be out of the woods until there’s a budget that restores NASA funding close to last year’s levels and includes language protecting the missions from cancellation.

Damage already done

Although OSIRIS-APEX is again go for Apophis, DellaGiustina said a declining budget has forced some difficult choices. The mission’s science team is “basically on hiatus” until sometime in 2027, meaning they won’t be able to participate in any planning for at least the next year and a half.

This has an outsize effect on younger scientists who were brought on to the mission to train for what the spacecraft will find at Apophis, DellaGiustina said in a meeting Tuesday of the National Academies’ Committee on Astrobiology and Planetary Sciences.

“We are not anticipating we will have to cut any science at Apophis,” she said. But the cuts do affect things like recalibrating the science instruments on the spacecraft, which got dirty and dusty from the mission’s brief landing to capture samples from asteroid Bennu in 2020.

“We are definitely undermining our readiness,” DellaGiustina said. “Nonetheless, we’re happy to be reinstated, so it’s about as good as can be expected, I think, for this particular point in time.”

At its closest approach, asteroid Apophis will be closer to Earth than the ring of geostationary satellites over the equator. Credit: NASA/JPL

The other consequence of the budget reduction has been a drain in expertise with operating the spacecraft. OSIRIS-APEX (formerly OSIRIS-REx) was built by Lockheed Martin, which also commands and receives telemetry from the probe as it flies through the Solar System. The cuts have caused some engineers at Lockheed to move off of planetary science missions to other fields, such as military space programs.

The other active missions waiting for word from NASA include the Chandra X-ray Observatory, the New Horizons probe heading toward interstellar space, the MAVEN spacecraft studying the atmosphere of Mars, and several satellites monitoring Earth’s climate.

The future of those missions remains murky. A senior official on one of the projects said they’ve been given “no direction at all” other than “to continue operating until advised otherwise.”

Another mission the White House wanted to cancel was THEMIS, a pair of spacecraft orbiting the Moon to map the lunar magnetic field. The lead scientist for that mission, Vassilis Angelopoulos from the University of California, Los Angeles, said his team will get “partial funding” for fiscal year 2026.

“This is good, but in the meantime, it means that science personnel is being defunded,” Angelopoulos told Ars. “The effect is the US is not achieving the scientific return it can from its multi-billion dollar investments it has made in technology.”

Artist’s concept of NASA’s MAVEN spacecraft, which has orbited Mars since 2014 studying the planet’s upper atmosphere.

To put a number on it, the missions already in space that the Trump administration wants to cancel represent a cumulative investment of $12 billion to design and build, according to the Planetary Society, a science advocacy group. An assessment by Ars concluded the operating missions slated for cancellation cost taxpayers less than $300 million per year, or between 1 and 2 percent of NASA’s annual budget.

Advocates for NASA’s science program met at the US Capitol this week to highlight the threat. Angelopoulos said the outcry from scientists and the public seems to be working.

“I take the implementation of the House budget as indication that the constituents’ pressure is having an effect,” he said. “Unfortunately, damage is being done already. Even if funding is reinstated, we have already lost people.”

Some scientists worry that the Trump administration may try to withhold funding for certain programs, even if Congress provides a budget for them. That would likely trigger a fight in the courts.

Bruce Jakosky, former principal investigator of the MAVEN Mars mission, raised this concern. He said it’s a “positive step” that NASA is now making plans under the assumption the agency will receive the budget outlined by the House. But there’s a catch.

“Even if the budget that comes out of Congress gets signed into law, the president has shown no reluctance to not spend money that has been legally obligated,” Jakosky wrote in an email to Ars. “That means that having a budget isn’t the end; and having the money get distributed to the MAVEN science and ops team isn’t the end—only when the money is actually spent can we be assured that it won’t be clawed back.

“That means that the uncertainty lives with us throughout the entire fiscal year,” he said. “That uncertainty is sure to drive morale problems.”

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

building-ordered-polymers-with-metal

Building ordered polymers with metal

Unlike traditional polymers, this structure allows MOFs to have open internal spaces with a well-defined size, which can allow some molecules to pass through while filtering out others. In addition, the presence of metals provides for interesting chemistry. The metals can serve as catalysts or preferentially bind to one molecule within a mixture.

Knowing what we know now, it all seems kind of obvious that this would work. But when Robson started his work at the University of Melbourne, the few people who thought about the issue at all expected that the molecules he was building would be unstable and collapse.

The first MOF Robson built used copper as its metal of choice. It was linked to an organic molecule that retained its rigid structure through the presence of a benzene ring, which doesn’t bend. Both the organic molecule and the copper could form four different bonds, allowing the structure to grow by doing the rough equivalent of stacking a bunch of three-sided pyramids—a conscious choice by Robson.

The world’s first MOF, synthesized by Robson and his colleagues. Credit: Johan Jarnestad/The Royal Swedish Academy of Sciences

In this case, however, the internal cavities remained filled by the solvent in which the MOF was formed. But the solvent could move freely through the material. Still, based on this example, Robson predicted many of the properties that have since been engineered into different MOFs: the ability to retain their structure even after solvents are removed, the presence of catalytic sites, and the ability of MOFs to act as filters.

Expanding the concept

All of that might seem a very optimistic take for someone’s first effort. But the measure of Robson’s success is that he convinced other chemists of the potential. One was Susumu Kitagawa of Kyoto University. Kitagawa and his collaborators built a MOF that had large internal channels that extended the entire length of the material. Made in a watery solution, the MOF could be dried out and have gas flow through it, with the structure retaining molecules like oxygen, nitrogen, and methane.

floating-electrons-on-a-sea-of-helium

Floating electrons on a sea of helium

By now, a handful of technologies are leading contenders for producing a useful quantum computer. Companies have used them to build machines with dozens to hundreds of qubits, the error rates are coming down, and they’ve largely shifted from worrying about basic scientific problems to dealing with engineering challenges.

Yet even at this apparently late date in the field’s development, there are companies that are still developing entirely new qubit technologies, betting the company that they have identified something that will let them scale in ways that enable a come-from-behind story. Recently, one of those companies published a paper that describes the physics of their qubit system, which involves lone electrons floating on top of liquid helium.

Trapping single electrons

So how do you get an electron to float on top of helium? To find out, Ars spoke with Johannes Pollanen, the chief scientific officer of EeroQ, the company that accomplished the new work. He said that it’s actually old physics, with the first demonstrations of it having been done half a century ago.

“If you bring a charged particle like an electron near the surface, because the helium is dielectric, it’ll create a small image charge underneath in the liquid,” said Pollanen. “A little positive charge, much weaker than the electron charge, but there’ll be a little positive image there. And then the electron will naturally be bound to its own image. It’ll just see that positive charge and kind of want to move toward it, but it can’t get to it, because the helium is completely chemically inert, there are no free spaces for electrons to go.”
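The strength of that image-charge attraction can be estimated with a textbook calculation. Because the potential has the same 1/z form as the hydrogen atom's, the electron's binding energy is just the hydrogen Rydberg scaled by the square of the effective image charge. The dielectric constant below is the standard literature value for liquid helium; none of these numbers come from the EeroQ paper:

```python
# Binding of an electron to its image charge beneath the helium surface.
# The attraction is V(z) = -L * e^2 / z with L = (eps - 1) / (4 * (eps + 1)),
# a hydrogen-like potential, so the ground state binds with L^2 Rydbergs.
EPS_HELIUM = 1.057      # dielectric constant of liquid helium (literature value)
RYDBERG_EV = 13.6057    # hydrogen ground-state binding energy, eV
KBOLTZ_EV = 8.617e-5    # Boltzmann constant, eV per kelvin

lam = (EPS_HELIUM - 1) / (4 * (EPS_HELIUM + 1))
binding_ev = lam**2 * RYDBERG_EV         # roughly 0.65 meV
binding_kelvin = binding_ev / KBOLTZ_EV  # roughly 8 K
```

The result, about 0.7 meV or 8 K, sits just above liquid helium's operating temperature, which is why the electrons stay bound to the surface instead of escaping.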

Obviously, getting liquid helium in the first place requires extremely low temperatures. But helium remains liquid at temperatures up to about 4 Kelvin, which doesn’t require the extreme refrigeration technologies needed for things like transmons. Those temperatures also provide a natural vacuum, since pretty much anything else will condense out onto the walls of the container.

The chip itself, along with diagrams of its organization. The trap is set by the gold electrode on the left. Dark channels allow liquid helium and electrons to flow into and out of the trap. And the bluish electrodes at the top and bottom read the presence of the electrons. Credit: EeroQ

Liquid helium is also a superfluid, meaning it flows without viscosity. This allows it to easily flow up tiny channels cut into the surface of silicon chips that the company used for its experiments. A tungsten filament next to the chip was used to load the surface of the helium with electrons at what you might consider the equivalent of a storage basin.

2025-nobel-prize-in-physics-awarded-for-macroscale-quantum-tunneling

2025 Nobel Prize in Physics awarded for macroscale quantum tunneling


John Clarke, Michel H. Devoret, and John Martinis built an electrical circuit-based oscillator on a microchip.

A device consisting of four transmon qubits, four quantum buses, and four readout resonators fabricated by IBM in 2017. Credit: Jay M. Gambetta, Jerry M. Chow & Matthias Steffen/CC BY 4.0

The 2025 Nobel Prize in Physics has been awarded to John Clarke, Michel H. Devoret, and John M. Martinis “for the discovery of macroscopic quantum tunneling and energy quantization in an electrical circuit.” The Nobel committee said during a media briefing that the laureates’ work provides opportunities to develop “the next generation of quantum technology, including quantum cryptography, quantum computers, and quantum sensors.” The three men will split the $1.1 million (11 million Swedish kronor) prize money. The presentation ceremony will take place in Stockholm on December 10, 2025.

“To put it mildly, it was the surprise of my life,” Clarke told reporters by phone during this morning’s press conference. “Our discovery in some ways is the basis of quantum computing. Exactly at this moment where this fits in is not entirely clear to me. One of the underlying reasons that cellphones work is because of all this work.”

When physicists began delving into the strange new realm of subatomic particles in the early 20th century, they discovered a realm where the old, deterministic laws of classical physics no longer apply. Instead, uncertainty reigns supreme. It is a world governed not by absolutes, but by probabilities, where events that would seem impossible on the macroscale occur on a regular basis.

For instance, subatomic particles can “tunnel” through seemingly impenetrable energy barriers. Imagine the electron as a wave washing against a tall barrier. A classical particle without enough energy to crest the barrier would simply bounce back, but a small part of the electron’s wave leaks through, so there is a small probability that the electron will turn up on the other side.

This neat little trick has been experimentally verified many times. In the 1950s, physicists devised a system in which the flow of electrons would hit an energy barrier and stop because they lacked sufficient energy to surmount that obstacle. But some electrons didn’t follow the established rules of behavior. They simply tunneled right through the energy barrier.
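How improbable tunneling is depends exponentially on the barrier. For a rectangular barrier, the standard approximation is T ≈ exp(−2κL), where κ is set by how much energy the particle lacks and L is the barrier's width. A sketch with illustrative numbers (a 1 eV energy deficit and nanometer-scale barriers), not tied to any particular experiment:

```python
import math

HBAR = 1.0545718e-34  # reduced Planck constant, J*s
M_E = 9.109e-31       # electron mass, kg
EV = 1.602e-19        # joules per electron-volt

def tunneling_probability(energy_ev, barrier_ev, width_nm):
    """Transmission through a rectangular barrier, T = exp(-2 * kappa * L)."""
    deficit = (barrier_ev - energy_ev) * EV      # energy shortfall, joules
    kappa = math.sqrt(2 * M_E * deficit) / HBAR  # wave decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

narrow = tunneling_probability(1.0, 2.0, 1.0)  # ~3.5e-5: rare but non-zero
wide = tunneling_probability(1.0, 2.0, 2.0)    # doubling the width squares it
```

The exponential means a barrier twice as wide doesn't halve the odds, it squares them, which is why tunneling is routine at subatomic scales yet seems impossible in everyday life.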

(l-r): John Clarke, Michel H. Devoret, and John M. Martinis. Credit: Niklas Elmehed/Nobel Prize Outreach

From subatomic to the macroscale

Clarke, Devoret, and Martinis were the first to demonstrate that quantum effects, such as quantum tunneling and energy quantization, can operate on macroscopic scales, not just one particle at a time.

After earning his PhD from the University of Cambridge, Clarke came to the University of California, Berkeley, as a postdoc, eventually joining the faculty in 1969. By the mid-1980s, Devoret and Martinis had joined Clarke’s lab as a postdoc and graduate student, respectively. The trio decided to look for evidence of macroscopic quantum tunneling using a specialized circuit called a Josephson junction—a macroscopic device that takes advantage of a tunneling effect that is now widely used in quantum computing, quantum sensing, and cryptography.

A Josephson junction—named after British physicist Brian Josephson, who won the 1973 Nobel Prize in Physics—is basically two superconductors separated by a thin insulating barrier. Despite this small gap between the two conductors, electrons can still tunnel through the insulator and create a current. That occurs at sufficiently low temperatures, when the junction becomes superconducting as electrons form so-called “Cooper pairs.”

The team built an electrical circuit-based oscillator on a microchip measuring about 1 centimeter in size—essentially a quantum version of the classic pendulum. Their biggest challenge was figuring out how to reduce the noise in their experimental apparatus. For their experiments, they first fed a weak current into the junction and measured the voltage—initially zero. Then they increased the current and measured how long it took for the system to tunnel out of its enclosed state to produce a voltage.

Credit: Johan Jarnestad/The Royal Swedish Academy of Sciences

They took many measurements and found that the average current increases as the device’s temperature falls, as expected. But at some point, the temperature got so low that the device became superconducting and the average current became independent of the device’s temperature—a telltale signature of macroscopic quantum tunneling.

The team also demonstrated that the Josephson junction exhibited quantized energy levels—meaning the energy of the system was limited to only certain allowed values, just like subatomic particles can gain or lose energy only in fixed, discrete amounts—confirming the quantum nature of the system. Their discovery effectively revolutionized quantum science, since other scientists could now test precise quantum physics on silicon chips, among other applications.

Lasers, superconductors, and superfluid liquids exhibit quantum mechanical effects at the macroscale, but these arise by combining the behavior of microscopic components. Clarke, Devoret, and Martinis were able to create a macroscopic effect—a measurable voltage—from a macroscopic state. Their system contained billions of Cooper pairs filling the entire superconductor on the chip, yet all of them were described by a single wave function. They behave like a large-scale artificial atom.

In fact, their circuit was basically a rudimentary qubit. Martinis showed in a subsequent experiment that such a circuit could be an information-bearing unit, with the lowest energy state and the first step upward functioning as a 0 and a 1, respectively. This paved the way for such advances as the transmon in 2007: a superconducting charge qubit with reduced sensitivity to noise.
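The reason the two lowest levels can act as a clean 0 and 1 is that the Josephson junction makes the circuit's energy ladder slightly uneven: the 0→1 spacing differs from the 1→2 spacing, so a drive tuned to the first transition leaves the higher levels alone. A sketch with typical, made-up transmon-scale numbers, not values from the prize-winning experiment:

```python
# Energy ladder of a weakly anharmonic oscillator, in GHz (with E0 = 0).
# The junction's nonlinearity shrinks each successive step by |ALPHA|,
# so the 0->1 transition can be driven without also exciting 1->2.
F01 = 5.0      # 0 -> 1 transition frequency, GHz (typical transmon scale)
ALPHA = -0.3   # anharmonicity, GHz (negative: steps get smaller going up)

def level(n):
    """E_n = n*F01 + ALPHA * n * (n - 1) / 2, the standard Duffing approximation."""
    return n * F01 + ALPHA * n * (n - 1) / 2

spacing_01 = level(1) - level(0)  # 5.0 GHz
spacing_12 = level(2) - level(1)  # 4.7 GHz: detuned, so level 2 stays unexcited
```

In a perfectly harmonic oscillator the spacings would all be equal and a drive would climb the whole ladder; the anharmonicity is what isolates the two-level qubit.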

“That quantization of the energy levels is the source of all qubits,” said Irfan Siddiqi, chair of UC Berkeley’s Department of Physics and one of Devoret’s former postdocs. “This was the grandfather of qubits. Modern qubit circuits have more knobs and wires and things, but that’s just how to tune the levels, how to couple or entangle them. The basic idea that Josephson circuits could be quantized and were quantum was really shown in this experiment. The fact that you can see the quantum world in an electrical circuit in this very direct way was really the source of the prize.”

So perhaps it is not surprising that Martinis left academia in 2014 to join Google’s quantum computing efforts, helping to build a quantum computer the company claimed had achieved “quantum supremacy” in 2019. Martinis left in 2020 and co-founded a quantum computing startup, Qolab, in 2022. His fellow Nobel laureate, Devoret, now leads Google’s quantum computing division and is also a faculty member at the University of California, Santa Barbara. As for Clarke, he is now a professor emeritus at UC Berkeley.

“These systems bridge the gap between microscopic quantum behavior and macroscopic devices that form the basis for quantum engineering,” Gregory Quiroz, an expert in quantum information science and quantum algorithms at Johns Hopkins University, said in a statement. “The rapid progress in this field over the past few decades—in part fueled by their critical results—has allowed superconducting qubits to go from small-scale laboratory experiments to large-scale, multi-qubit devices capable of realizing quantum computation. While we are still on the hunt for undeniable quantum advantage, we would not be where we are today without many of their key contributions to the field.”

As is often the case with fundamental research, none of the three physicists realized at the time how significant their discovery would be in terms of its impact on quantum computing and other applications.

“This prize really demonstrates what the American system of science has done best,” Jonathan Bagger, CEO of the American Physical Society, told The New York Times. “It really showed the importance of the investment in research for which we do not yet have an application, because we know that sooner or later, there will be an application.”

here’s-the-real-reason-endurance-sank

Here’s the real reason Endurance sank


The ship wasn’t designed to withstand the powerful ice compression forces—and Shackleton knew it.

The Endurance, frozen and keeled over in the ice of the Weddell Sea. Credit: BF/Frank Hurley

In 1915, intrepid British explorer Sir Ernest Shackleton and his crew were stranded for months in the Antarctic after their ship, Endurance, was trapped by pack ice, eventually sinking into the freezing depths of the Weddell Sea. Miraculously, the entire crew survived. The prevailing popular narrative surrounding the famous voyage features two key assumptions: that Endurance was the strongest polar ship of its time, and that the ship ultimately sank after ice tore away the rudder.

However, a fresh analysis reveals that Endurance would have sunk even with an intact rudder; it was crushed by the cumulative compressive forces of the Antarctic ice, with no single point of failure responsible for the sinking. Furthermore, the ship wasn’t designed to withstand those forces, and Shackleton was likely well aware of that fact, according to a new paper published in the journal Polar Record. Yet he chose to embark on the risky voyage anyway.

Author Jukka Tuhkuri of Aalto University is a polar explorer and one of the leading researchers on ice worldwide. He was among the scientists on the Endurance22 mission that discovered the Endurance shipwreck in 2022, documented in a 2024 National Geographic documentary. The ship was in pristine condition partly because of the lack of wood-eating microbes in those waters. In fact, the Endurance22 expedition’s exploration director, Mensun Bound, told The New York Times at the time that the shipwreck was the finest example he’s ever seen; Endurance was “in a brilliant state of preservation.”

As previously reported, Endurance set sail from Plymouth on August 6, 1914, with Shackleton joining his crew in Buenos Aires, Argentina. By the time they reached the Weddell Sea in January 1915, accumulating pack ice and strong gales slowed progress to a crawl. Endurance became completely icebound on January 24, and by mid-February, Shackleton ordered the boilers to be shut off so that the ship would drift with the ice until the weather warmed sufficiently for the pack to break up. It would be a long wait. For 10 months, the crew endured the freezing conditions. In August, ice floes pressed into the ship with such force that the ship’s decks buckled.

The ship’s structure nonetheless remained intact, but by October 25, Shackleton realized Endurance was doomed. He and his men opted to camp out on the ice some two miles (3.2 km) away, taking as many supplies as they could with them. Compacted ice and snow continued to fill the ship until a pressure wave hit on November 13, crushing the bow and splitting the main mast—all of which was captured on camera by crew photographer Frank Hurley. Another pressure wave hit in the late afternoon on November 21, lifting the ship’s stern. The ice floes parted just long enough for Endurance to finally sink into the ocean before closing again to erase any trace of the wreckage.

Once the wreck had been found, the team recorded as much as they could with high-resolution cameras and other instruments. Elizabeth Chai Vasarhelyi, who co-directed the National Geographic documentary, particularly noted the technical challenge of deploying a remote digital 4K camera with lighting at 9,800 feet underwater, and the first deployment at that depth of photogrammetric and laser technology. This resulted in a millimeter-scale digital reconstruction of the entire shipwreck to enable close study of the finer details.

Challenging the narrative

The ice and wave tank at Aalto University. Credit: Aalto University

It was shortly after the Endurance22 mission found the shipwreck that Tuhkuri realized that there had never been a thorough structural analysis conducted of the vessel to confirm the popular narrative. Was Endurance truly the strongest polar ship of that time, and was a broken rudder the actual cause of the sinking? He set about conducting his own investigation to find out, analyzing Shackleton’s diaries and personal correspondence, as well as the diaries and correspondence of several Endurance crew members.

Tuhkuri also conducted a naval architectural analysis of the vessel under the conditions of compressive ice, which had never been done before. He then compared those results with the underwater images of the Endurance shipwreck. He also looked at comparable wooden polar expedition ships and steel icebreakers built in the late 1800s and early 1900s.

Endurance was originally named Polaris; Shackleton renamed it when he purchased the ship in 1914 for his doomed expedition. Per Tuhkuri, the ship had a lower (tween) deck, a main deck, and a short bridge deck above them that stopped at the machine room in order to make space for the steam engine and boiler. There were no beams in the machine room area, nor any reinforcing diagonal beams, which weakened this significant part of the ship’s hull.

This is because Endurance was originally built for polar tourism and for hunting polar bears and walruses in the Arctic; at the ice edge, ships only needed sufficiently strong planking and frames to withstand the occasional collision from ice floes. However, “In pack ice conditions, where compression from the ice needs to be taken into account, deck beams become of key importance,” Tuhkuri wrote. “It is the deck beams that keep the two ship sides apart and maintain the shape of a ship. Without strong enough deck beams, a vessel gets crushed by compressive ice, more or less irrespective of the thickness of planking and frames.”

The Endurance was nonetheless sturdy enough to withstand five serious ice compression events before her final sinking. On April 4, 1915, one of the scientists on board reported hearing loud rumbling noises from a 3-meter-high ice ridge that formed near the ship, causing the ship to vibrate. Tuhkuri believes this was due to a “compressive failure process” as ice crushed against the hull. On July 14, a violent snowstorm hit, and crew members could hear the ice breaking beneath the ship. The ice ridges that formed over the next few days were sufficiently concerning that Shackleton instituted four-hour watches on deck and insisted on having everything packed in case they had to abandon ship.

Crushed by the ice

Idealized cross sections of early Antarctic ships. Endurance was type (a); Fram and Deutschland were type (b). Credit: J. Tuhkuri, 2025

On August 1, an ice floe fractured and grinding noises were heard beneath the ship as the floe piled underneath it, lifting Endurance and causing her to heel first to starboard and then to port, as several deck beams began to buckle. Similar compression events kept happening until there was a sudden escalation on September 30. The hull began vibrating hard enough to shake the whole rigging as even more ice crushed against the hull. Even the linoleum on the floors buckled; Harry McNish wrote in his diary that it looked like Endurance “was going to pieces.”

Yet another ice compression event occurred on October 17, pushing the vessel one meter into the air as the iron plates on the engine room’s floor buckled and slid over each other. Ship scientist Reginald James wrote that “for a time things were not good as the pressure was mostly along the region of the engine room where there are no beams of any strength,” while Captain Worsley described the engine room as “the weakest part of the ship.”

By the afternoon, Endurance was heeled almost 30 degrees to port, so much so that the keel was visible from the starboard side, per Tuhkuri, although the ice started to fracture in the evening so that the ship could shift upright again. The crew finally abandoned ship on October 27, a few days after an even more severe compression event. Endurance finally sank below the ice on November 21.

Tuhkuri’s analysis of the structural damage to Endurance revealed that the rudder and the stern post were indeed torn off, confirmed by crew correspondence and diaries and by the underwater images taken of the wreck. The keel was also ripped off, with McNish noting in his diary that the ship broke into two halves as a result. The underwater images are less clear on this point, but Tuhkuri writes that there is something “some distance forward from the rudder, on the port side” that “could be the end of a displaced part of the keel sticking up from under the ship.”

All the diaries mentioned the buckling and breaking of deck beams, and there was much structural damage to the ship’s sides; for instance, Worsley writes of “great spikes of ice… forcing their way through the ship’s sides.” There are no visible holes in the wreck’s sides in the underwater images, but Tuhkuri posits that the damage is likely buried in the mud on the sea bed, given that by late October, Endurance “was heavily listed and the bottom was exposed.”

Jukka Tuhkuri on the polar ice. Credit: Aalto University

Based on his analysis, Tuhkuri concluded that the rudder wasn’t the sole or primary reason for the ship’s sinking. “Endurance would have sunk even if it did not have a rudder at all,” Tuhkuri wrote; it was crushed by the ice, with no single reason for its eventual sinking. Shackleton himself described the process as ice floes “simply annihilating the ship.”

Perhaps the most surprising finding is that Shackleton knew of Endurance‘s structural shortcomings even before undertaking the voyage. Per Tuhkuri, the devastating effects of compressive ice on ships were known to shipbuilders in the early 1900s. An early Swedish expedition was forced to abandon its ship Antarctic in February 1903 when it became trapped in the ice. Events progressed much as they later would with Endurance: the ice lifted Antarctic up so that the ship heeled over, with ice-crushed sides, buckling beams, broken planking, and a damaged rudder and stern post. The final sinking occurred when an advancing ice floe ripped off the keel.

Shackleton knew of Antarctic‘s fate and had even been involved in the rescue operation. He also helped Wilhelm Filchner make final preparations for Filchner’s 1911–1913 polar expedition with a ship named Deutschland; he even advised his colleague to strengthen the ship’s hull by adding diagonal beams, the better to withstand the Weddell Sea ice. Filchner did so, and as a result, Deutschland survived eight months of being trapped in compressive ice until the ship was finally able to break free and sail home. (It took a torpedo attack in 1917 to sink the good ship Deutschland.)

The same shipyard that modified Deutschland had also just signed a contract to build Endurance (then called Polaris). So both Shackleton and the shipbuilders knew how destructive compressive ice could be and how to bolster a ship against it. Yet Endurance was not outfitted with diagonal beams to strengthen its hull. And knowing this, Shackleton bought Endurance anyway for his 1914–1915 voyage. In a 1914 letter to his wife, he even compared the strength of its construction unfavorably with that of the Nimrod, the ship he used for his 1907–1909 expedition. So Shackleton had to know he was taking a big risk.

“Even simple structural analysis shows that the ship was not designed for the compressive pack ice conditions that eventually sank it,” said Tuhkuri. “The danger of moving ice and compressive loads—and how to design a ship for such conditions—was well understood before the ship sailed south. So we really have to wonder why Shackleton chose a vessel that was not strengthened for compressive ice. We can speculate about financial pressures or time constraints, but the truth is, we may never know. At least we now have more concrete findings to flesh out the stories.”

Polar Record, 2025. DOI: 10.1017/S0032247425100090 (About DOIs).

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

pentagon-contract-figures-show-ula’s-vulcan-rocket-is-getting-more-expensive

Pentagon contract figures show ULA’s Vulcan rocket is getting more expensive

A SpaceX Falcon Heavy rocket with NASA’s Psyche spacecraft launches from NASA’s Kennedy Space Center in Florida on October 13, 2023. Credit: Chandan Khanna/AFP via Getty Images

The launch orders announced Friday comprise the second batch of NSSL Phase 3 missions the Space Force has awarded to SpaceX and ULA.

It’s important to remember that these prices aren’t what ULA or SpaceX would charge a commercial satellite customer. The US government pays a premium for access to space. The Space Force, the National Reconnaissance Office, and NASA don’t insure their launches like a commercial customer would do. Instead, government agencies have more insight into their launch contractors, including inspections, flight data reviews, risk assessments, and security checks. Government missions also typically get priority on ULA and SpaceX’s launch schedules. All of this adds up to more money.

A heavy burden

Four of the five launches awarded to SpaceX Friday will use the company’s larger Falcon Heavy rocket, according to Lt. Col. Kristina Stewart at Space Systems Command. One will fly on SpaceX’s workhorse Falcon 9. This is the first time a majority of the Space Force’s annual launch orders has required the lift capability of a Falcon Heavy, with three Falcon 9 booster cores combining to heave larger payloads into space.

All versions of ULA’s Vulcan rocket use a single core booster, with varying numbers of strap-on solid-fueled rocket motors to provide extra thrust off the launch pad.

Here’s a breakdown of the seven new missions assigned to SpaceX and ULA:

USSF-149: Classified payload on a SpaceX Falcon 9 from Florida

USSF-63: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-155: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-205: WGS-12 communications satellite on a SpaceX Falcon Heavy from Florida

NROL-86: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-88: GPS IIIF-4 navigation satellite on a ULA Vulcan VC2S (two solid rocket boosters) from Florida

NROL-88: Classified payload on a ULA Vulcan VC4S (four solid rocket boosters) from Florida
