Science


Discovery of HMS Endeavour wreck confirmed

By 2016, RIMAP’s volunteers, operating on grants and private donations, had located 10 of the 13 wrecks, almost exactly where historical charts said they should be. And the search had gotten a boost from the 1998 discovery of a 200-year-old paper trail linking the troop transport Lord Sandwich to its former life as HMS Endeavour.

Narrowing the field

One candidate was found just 500 meters off the coast of Rhode Island (designated RI 2394), 14 meters below the surface and buried in nearly 250 years’ worth of sediment and silt. RIMAP’s team concluded in 2018 that this was likely the wreck of the Endeavour, although the researchers emphasized that they needed to accumulate more evidence to support their conclusions. That’s because only about 15 percent of the ship survived. Any parts of the hull that weren’t quickly buried by silt have long since decomposed in the water.

The ANMM felt confident enough in its own research by 2022 to hold that controversial news conference announcing the discovery, against RIMAP’s objections. But the evidence is now strong enough for RIMAP to reach the same conclusion. “In 1999 and again in 2019, RIMAP and ANMM agreed on a set of criteria that, if satisfied, would permit identification of RI 2394 as Lord Sandwich,” the authors wrote in the report’s introduction. “Based on the agreed preponderance of evidence approach, enough of these criteria have now been met… to positively identify RI 2394 as the remnants of Lord Sandwich, formerly James Cook’s HM Bark Endeavour.”

The Rhode Island Historical Preservation and Heritage Commission and the ANMM are now collaborating to ensure that the wreck site is protected in the future.



Researchers get viable mice by editing DNA from two sperm


Altering chemical modifications of DNA lets the DNA from two sperm make a mouse.

For many species, producing an embryo is a bit of a contest between males and females. Males want as many offspring as possible and want the females to devote as many resources as possible to each of them. Females do better by keeping their options open and distributing resources in a way to maximize the number of offspring they can produce over the course of their lives.

In mammals, this plays out through the chemical modification of DNA, a process called imprinting. Males imprint their DNA by adding methyl modifications to it in a way that alters the activity of genes in order to promote the growth of embryos. Females do similar things chemically but focus on shutting down genes that promote embryonic growth. In a handful of key regions of the genome, having only the modifications specific to one sex is lethal, as the embryo can’t grow to match its stage of development.

One consequence of this is that you normally can’t produce embryos using only the DNA from eggs or from sperm. But over the last few years, researchers have gradually worked around the need for imprinted sites to have one copy from each parent. Now, in a very sophisticated demonstration, researchers have used targeted editing of methylation to produce mice from the DNA of two sperm.

Imprinting and same-sex parents

There’s a long history of studying imprinting in mice. Long before the genome was sequenced, people had identified specific parts of the chromosomes that, if deleted, were lethal—but only if inherited from one of the two sexes. They correctly inferred that this meant that the genes in the region are normally inactivated in the germ cells of one of the sexes. If they’re deleted in the other sex, then the combination that results in the offspring—missing on one chromosome, inactivated in the other—is lethal.

Over time, seven critical imprinted regions were identified, scattered throughout the genome. And, roughly 20 years ago, a team managed to find the right deletion to enable a female mouse to give birth to offspring that received a set of chromosomes from each of two unfertilized eggs. The researchers drew parallels to animals that can reproduce through parthenogenesis, where the female gives birth using unfertilized eggs. But the mouse example obviously took a big assist via the manipulation of egg cells in culture before being implanted in a mouse.

By 2016, researchers were specifically editing in deletions of imprinted genes in order to allow the creation of embryos by fusing stem cell lines that only had a single set of chromosomes. This was far more focused than the original experiment, as the deletions were smaller and affected only a few genes. By 2018, they had expanded the repertoire by figuring out how to get the genomes of two sperm together in an unfertilized egg with its own genome eliminated.

The products of two male parents, however, died the day after birth. This is either due to improperly compensating for imprinting or simply because the deletions had additional impacts on the embryo’s health. That changed earlier this year, when a very specific combination of 20 different gene edits and deletions enabled mice generated using the chromosomes from two sperm cells to survive to adulthood.

The problem with all of these efforts is that the deletions may have health impacts on the animals and may still cause problems if inherited from the opposite sex. So, while it’s an interesting way to confirm our understanding of the role of imprinting in reproduction, it’s not necessarily the route to using this as a reliable reproductive tool. Which finally brings us to the present research.

Roll your own imprinting

Left out of the above is the nature of the imprinting itself: How does a chunk of chromosome and all the genes on it get marked as coming from a male or female? The secret is to chemically modify that region of the DNA in a way that doesn’t alter base pairing, but does allow it to be recognized as distinct by proteins. The most common way of doing this is to link a single carbon atom (a methyl group) to the base cytosine. This tends to shut nearby genes down, and it can be inherited through cell division, since there are enzymes that recognize when one of the two DNA strands is unmodified and add a methyl to it.

Methylation turns out to explain imprinting. The key regions for imprinting are methylated differently in males and females, which influences nearby gene activity and can be maintained throughout all of embryonic development.

So, to make up for the imprinting problems caused when both sets of chromosomes come from the same sex, what you need to do is a targeted reprogramming of methylation. And that’s what the researchers behind the new paper have done.

First, they needed to tell the two sets of chromosomes apart. To do that, they used two distantly related strains of mice, one standard lab strain that originated in Europe and a second that was caught in the wild in Thailand less than a century ago. These two strains have been separated for long enough that they have a lot of small differences in DNA sequences scattered throughout the genome. So, it was possible to use these to target one or the other of the genomes.

This was done using parts of the DNA editing systems that have been developed, the most famous of which is CRISPR/Cas9. These systems have a protein that pairs with an RNA sequence to find a matching sequence in DNA. In this case, those RNAs could be made so that they target imprinting regions in just one of the two mouse strains. The protein/RNA combinations could also be linked to enzymes that modify DNA, either adding methyls or removing them.
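As a rough illustration of the strain-specific targeting idea, here is a minimal Python sketch (my own, not the authors’ pipeline): it scans a pair of aligned sequences from the two strains and keeps only guide-length windows that span a sequence difference, so a guide built from one strain’s version would mismatch the other strain’s. Real guide design would also have to handle PAM requirements, off-target scoring, and delivery, all of which are ignored here.

```python
# Hypothetical sketch: find guide-length windows that differ between two mouse
# strains, so a guide RNA built from one strain's sequence targets only that genome.

def strain_specific_guides(lab_seq: str, wild_seq: str, guide_len: int = 20):
    """Return (start, lab_window, wild_window) for windows spanning a strain difference."""
    assert len(lab_seq) == len(wild_seq), "sequences must be pre-aligned"
    guides = []
    for start in range(len(lab_seq) - guide_len + 1):
        lab_window = lab_seq[start:start + guide_len]
        wild_window = wild_seq[start:start + guide_len]
        if lab_window != wild_window:  # window covers at least one strain-specific variant
            guides.append((start, lab_window, wild_window))
    return guides

# Toy example: a single A/G difference creates strain-specific guide candidates.
lab = "ATCGATCGATCGATCGATCGATCGATCG"
wild = "ATCGATCGATCGGTCGATCGATCGATCG"
for start, lab_g, wild_g in strain_specific_guides(lab, wild)[:3]:
    print(start, lab_g, wild_g)
```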

To bring all this together, the researchers started with an egg and deleted the genome from it. They then injected the heads of sperm, one from the lab strain, one from the recently wild mouse. This left them with an egg with two sets of chromosomes, although a quarter of them would have two Y chromosomes and thus be inviable (unlike the Y, the X has essential genes). Arbitrarily, they chose one set of chromosomes to be female and targeted methylation and de-methylation enzymes to it in order to reprogram the pattern of methylation on it. Once that was done, they could allow the egg to start dividing and implant it into female mice.

Rare success

The researchers spent time ensuring that the enzymes they had were modifying the methylation as expected and that development started as usual. Their general finding is that the enzymes did change the methylation state for about 500 bases on either side of the targeted site and did so pretty consistently. But there are seven different imprinting sites that need to be modified, each of which controls multiple nearby genes. So, while the modifications were consistent, they weren’t always thorough enough to result in the expected changes to all of the nearby genes.

This limited efficiency showed up in the rate of survival. Starting with over 250 reprogrammed embryos that carried DNA from two males, they ended up with 16 pregnancies, of which only seven made it to birth: four pups died at birth, and three were born alive. Based on other experiments, most of the rest died during the second half of embryonic development. Of the three live ones, one was nearly 40 percent larger than the typical pup, suggesting problems regulating growth—it died the day after birth.

All three live births were male, although the numbers are small enough that it’s impossible to tell if that’s significant or not.

The researchers suggest several potential reasons for the low efficiency. One is simply that, while the probability of properly reprogramming at least one of the sites is high, reprogramming all seven is considerably more challenging. There’s also the risk of off-target effects, where the modification takes place in locations with similar sequences to the ones targeted. They also concede that there could be other key imprinted regions that we simply haven’t identified yet.
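To get a rough feel for that first point, suppose each of the seven imprinting regions is reprogrammed correctly and independently with probability $p$. The chance that all seven come out right in the same embryo is then $p^7$, which drops off quickly even when each individual edit is fairly reliable (the numbers here are purely illustrative, not taken from the paper):

$$P(\text{all seven correct}) = p^{7}, \qquad p = 0.9 \;\Rightarrow\; 0.9^{7} \approx 0.48$$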

We would need to sort that out if we want to use this approach as a tool; it could potentially be useful as a way to breed mice that carry mutations that affect female viability or fertility. But this work has already been useful even in its inefficient state, because it serves as a pretty definitive validation of our ideas about the function of imprinting in embryonic development, as well as the critical role methylation plays in this process. If we weren’t largely right about both of those, the efficiency of this approach wouldn’t be low—it would be zero.

PNAS, 2025. DOI: 10.1073/pnas.2425307122  (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Sailing the fjords like the Vikings yields unexpected insights


“On we sweep with threshing oar”

Greer Jarrett has identified four possible small ports, or “havens,” used by Vikings along the Norwegian coast.

Experimental archaeologist Greer Jarrett of Lund University in Sweden has been sailing in the footsteps of Vikings for the last three years.

If you want to learn more about how and where the Vikings sailed, making the journey through the fjords yourself in replica boats is a practical, hands-on approach to achieving that end. Greer Jarrett, an archaeologist at Lund University in Sweden, has spent the last three years doing just that, sailing more than 5,000 kilometers along known Viking trade routes in open, square-rigged clinker boats similar to those used by the Vikings.

Not only has Jarrett learned a great deal about the boats themselves, he also identified four possible havens along the Norwegian coast, part of what may have been a decentralized network that played a crucial role in trade and travel during that period. And those ports are located farther out to sea than other major ports and hubs known to date, according to a paper he published in the Journal of Archaeological Method and Theory.

It’s just the latest intriguing discovery enabled by the growing field of experimental archaeology, whereby researchers seek to reverse-engineer all manner of ancient technologies. Experimental archaeologists have, for instance, built their own versions of Early Upper Paleolithic adzes, axes, and chisels. The resulting fractures and wear enabled them to develop new criteria for identifying the likely functions of ancient tools. Others have tried to cook like the Neanderthals, concluding that flint flakes were surprisingly effective for butchering birds, and that roasting the birds damages the bones to such an extent that it’s unlikely they would be preserved in the archaeological record.

Kent State University’s Metin Eren has done practical experiments to study, for instance, the trajectories of spears tipped with replica Clovis points and launched with atlatls, and how their performance compares to javelins used by Neanderthals. He even fashioned rudimentary blades out of his own frozen feces to test whether they could cut through pig hide, muscle, and tendon—solely to test a famous anthropological legend about an elderly Inuit man in the 1950s who purportedly did the same to kill and skin a dog, using its rib cage as a makeshift sled to venture off into the Arctic. (It did not work, so myth: busted. But it did snag Eren an Ig Nobel prize.)

Taking a hands-on, experimental archaeological approach to studying the Vikings makes sense in light of the dearth of contemporary written sources. “We have a few things written by outsiders, but there’s very, very few accounts written or delivered by people from Scandinavia during that period,” Jarrett told Ars. “We normally rely on indirect forms of evidence, be that genetics or archaeology or linguistics, which show strong, very frequent connections across maritime areas in the North Atlantic. But because traveling by boat is kind of an archaeologically invisible act, you don’t leave any footprints. So we have very little information about the voyages between these points.”


The sailing voyages made by Greer Jarrett during the research project, as well as the four possible Viking harbors he identified. Credit: Greer Jarrett

Jarrett and his crew used four or five different replica boats for their test voyages. Most were built by volunteers, enthusiasts, or students Jarrett had met during his considerable time in the field. They then sailed along the west coast of the Scandinavian Peninsula, a core area of Viking seafaring.

“These are reconstructions of traditional Norwegian boats from the 1800s and early 1900s,” said Jarrett. “My idea was, because of this really long-term continuity in traditional boat building practices, especially in Norway, it might be possible to use these later boats which have lots of similarities to try and work out the potentials of where people might have gotten out. It’s the idea of suggesting potentials based on practical experience to try and join those dots between the different evidence we have across the Viking world.”

That decision has led to some criticism from colleagues because of the enormous gap in time, but Jarrett defends his choice. “The Viking Age ends in the 11th century, and we’re talking about boats from 800 years later,” he said. “But the construction techniques and the way they are rigged and their general performance characteristics are similar enough. Because this is a project about voyages and not a project about boat building, it seemed like a defensible analogy.”

Seeking safe harbor

“On the long-range voyages, we worked in watches of four hours on and four hours off, and that is just about long enough to get some sleep on your off watch, but also just about short enough that you don’t get really, really, really cold, which is obviously a risk,” said Jarrett. “It was manageable, but we looked like penguins. I mean, we’re wearing six layers of wool at any time and sleeping all stacked together for warmth. But other times it’s really nice. The spring and the autumn in Scandinavia, there’s much more likelihood of high-pressure cycles, which means that it’s clearer and sunnier than in the summer itself.”

Nonetheless, there were some rough moments, such as when the mast spar holding up the mainsail snapped, forcing the crew to improvise and lash two oars together to hold the sail so they could continue their journey. It took several days to repair the boat so it could sail again. There was no safety boat following along in case the crew got into trouble, and no engine, although they did have a life raft, which the crew has yet to use.

Based on his sailing trials, Jarrett believes that the Vikings had no need for navigational tools like maps, a compass, or a sextant, relying instead on what he calls “mental maps”—or a “maritime cultural mindscape”—based on sailors’ memories and experiences passed down orally through generations. Those maps might also be informed by the myths linked to well-known coastal landmarks, such as skerries, small islets, or reefs.

“People had been moving by boat along the west coast of Scandinavia for a really, really, really long time, probably since the late Neolithic, if not earlier—thousands of years before the Viking age,” said Jarrett. “There are big trading networks in place beforehand, and that is reflected in the names, place names along the west coast. My primary argument is if you spend 3,000 years traveling up and down a coastline in which you can use the coast at all times for navigation, then it’s unnecessary to develop instrumentation.”

“Instruments are used when you are in a place out in the open sea that you don’t know,” Jarrett continued. “We definitely know they didn’t have compasses because those don’t arrive from China until the 1200s. There are these ideas about sunstones and sundials, or little sun compasses, which are entirely possible. But there’s no legitimate proof of either of them archaeologically yet. I may well be proved wrong if we find them at some point, but I don’t think they’re necessary for this at all.”

Based on the sailing trials, archaeological and documentary evidence of Viking Age maritime centers, and digital reconstructions of past sea levels, Jarrett was able to develop a useful set of criteria for evaluating potential havens. For instance, the site should be reachable in low visibility, with land or sea marks that sailors could use as bearings; large enough to accommodate multiple vessels of at least the size of a fyring (which can house a crew of four to 10 people); provide good protection from sea swell and storm surges; and have access to fresh water, among other criteria. Four sites scored sufficiently high by those criteria to qualify as possible Viking havens.
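To make the screening idea concrete, here is a small, hypothetical Python sketch of checking a candidate site against criteria like those above. The criterion names paraphrase the article; the pass threshold and the example site are placeholders rather than anything from Jarrett’s paper.

```python
# Hypothetical multi-criteria screen for candidate Viking havens.
# Criterion names paraphrase the article; the cutoff is a placeholder.

CRITERIA = [
    "reachable_in_low_visibility",
    "landmarks_or_seamarks_for_bearings",
    "room_for_several_fyring_sized_boats",
    "shelter_from_swell_and_storm_surge",
    "fresh_water_access",
]

def haven_score(site_attributes: dict) -> int:
    """Count how many of the criteria a candidate site satisfies."""
    return sum(1 for criterion in CRITERIA if site_attributes.get(criterion, False))

candidate = {
    "reachable_in_low_visibility": True,
    "landmarks_or_seamarks_for_bearings": True,
    "room_for_several_fyring_sized_boats": True,
    "shelter_from_swell_and_storm_surge": True,
    "fresh_water_access": False,
}

THRESHOLD = 4  # placeholder cutoff, not from the paper
score = haven_score(candidate)
print(f"{score} of {len(CRITERIA)} criteria met ->",
      "possible haven" if score >= THRESHOLD else "less likely")
```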

The four sites are Smørhamn, located at the confluence of Oldersund and the Frøysjø, where an inn and trading post are known to have existed since at least the late 17th century; the archipelago of Sørøyane between Stad and Ålesund, near where the sea battle of Hjörungavágr was fought circa 986 CE; Bjørnsund, a number of small islands off the southwestern tip of Hustadvika; and the island of Storfosna, which appears on 16th and 17th century charts.

“I’m not saying, ‘This is where they went,'” said Jarrett. “I’m saying that, with these kinds of boats under these conditions, it would be possible to go to these places. And it’s much more difficult—not impossible, but much more difficult—to go to these other places or to sail in these other conditions.”

Pining for the fjords

The next step is for Jarrett and other archaeologists to hunt for evidence in support of his hypothesis. “Most of these sites have never been excavated,” said Jarrett. “There’s been a long assumption that these are landing places with the idea that you are dragging your boat ashore. I’m very opposed to that idea because these are two-and-a-half-ton boats, let alone the cargo. Unless you have a team of oxen and 20 people at your command, there is no way you’re getting them on the beach. I’m very convinced that these places have jetties and mooring posts likely preserved underwater. All of that organic material survives much better underwater than it does on land. So I think that’s very possible.”

They might also find smaller items suggestive of a thriving harbor community. “Whenever you go into land, you’ve got something that’s broken, so you need to do repairs,” said Jarrett. “So things like clench nails or piles of ballast stones or signs of smithing—the typical kind of things you’d use for repairing your ship, I think are possible to find.” Jarrett’s methodology might also prove useful for studying other seafaring communities.

The practical experience of sailing the same seas as the Vikings naturally led to some surprising insights. “You are able to ask very different questions the minute you walk away from your desk and get on a boat,” said Jarrett. “I think it’s essential to do that because you think in new ways. In terms of the results themselves, the boats are extremely seaworthy crafts. When you get in them for the first time, you don’t think that, because they’re very, very light. They feel very flimsy, and they’re very low in the water compared to a modern sailing boat. So you feel really in touch with the wave, which is kind of scary. But because they’re so flexible and because of the way they’re rigged, they’re actually really stable, even in big waves.”

“We kept going out thinking, ‘Oh, this is maybe the limit of what this boat can tolerate,’ and then it would be fine, and we’d be, ‘Okay, let’s go a little bit in slightly bigger waves with slightly stronger wind,'” Jarrett continued. “So I think our comfort zones definitely visibly expanded during that period. And I had the chance to work with the same crews over three years. By the end of those three years, we were doing stuff that we would never have been able to do at the beginning.”

Another big difference from modern boats, Jarrett discovered, is that one cannot sail a traditional Viking craft alone. “It has to be a collaborative effort because of how you need a person at the front and the back of the boat basically at all times,” he said. “So developing the crew together and gaining not only skills, but also trust between us meant that we could do things in 2024 that seemed completely insane just a couple of years earlier. I cannot imagine what that is like if you have an entire lifetime of Viking sailors working together for 30 years. It must be an incredible way of creating social bonds.”

DOI: Journal of Archaeological Method and Theory, 2025. 10.1007/s10816-025-09708-6  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



How a grad student got LHC data to play nice with quantum interference


New approach is already having an impact on the experiment’s plans for future work.

The ATLAS particle detector of the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) in Geneva, Switzerland. Credit: EThamPhoto/Getty Images


Measurements at the Large Hadron Collider have been stymied by one of the most central phenomena of the quantum world. But now, a young researcher has championed a new method to solve the problem using deep neural networks.

The Large Hadron Collider is one of the biggest experiments in history, but it’s also one of the hardest to interpret. Unlike seeing an image of a star in a telescope, saying anything at all about the data that comes out of the LHC requires careful statistical modeling.

“If you gave me a theory [that] the Higgs boson is this way or that way, I think people imagine, ‘Hey, you built the experiment, you should be able to tell me what you’re going to see under various hypotheses!’” said Daniel Whiteson, a professor at the University of California, Irvine. “But we don’t.”

One challenge with interpreting LHC data is interference, a core implication of quantum mechanics. Interference allows two possible events to inhibit each other, weakening the likelihood of seeing the result of either. In the presence of interference, physicists needed to use a fuzzier statistical method to analyze data, losing the data’s full power and increasing its uncertainty.

However, a recent breakthrough suggests a different way to tackle the problem. The ATLAS collaboration, one of two groups studying proton collisions at the LHC, released two papers last December that describe new ways of exploring data from their detector. One describes how to use a machine learning technique called Neural Simulation-Based Inference to maximize the potential of particle physics data. The other demonstrates its effectiveness with the ultimate test: re-doing a previous analysis with the new technique and seeing dramatic improvement.

The papers are the culmination of a young researcher’s six-year quest to convince the collaboration of the value of the new technique. Its success is already having an impact on the experiment’s plans for future work.

Making sense out of fusing bosons

Each particle collision at the LHC involves many possible pathways in which different particles combine to give rise to the spray of debris that experimenters see. In 2017, David Rousseau at IJCLab in Orsay, a member of the ATLAS collaboration, asked one of his students, Aishik Ghosh, to improve his team’s ability to detect a specific pathway. That particular pathway is quite important since it’s used to measure properties of the Higgs boson, a particle (first measured in 2012) that helps explain the mass of all other fundamental particles.

It was a pretty big ask. “When a grad student gets started in ATLAS, they’re a tiny cog in a giant, well-oiled machine of 3,500 physicists, who all seem to know exactly what they’re doing,” said Ghosh.

The pathway Ghosh was asked to study occurs via several steps. First, the two colliding protons each emit a W boson, a particle associated with the weak nuclear force. These two bosons fuse together, changing their identity to form a Higgs boson. The Higgs boson then decays, forming a pair of Z bosons, another particle associated with the weak force. Finally, those Z bosons themselves each decay into a lepton, like an electron, and its antimatter partner, like a positron.

A Feynman diagram for the pathway studied by Aishik Ghosh. Credit: ATLAS

Measurements like the one Ghosh was studying are a key way of investigating the properties of the Higgs boson. By precisely measuring how long it takes the Higgs boson to decay, physicists could find evidence of it interacting with new, undiscovered particles that are too massive for the LHC to produce directly.

Ghosh started on the project, hoping to find a small improvement in the collaboration’s well-tested methods. Instead, he noticed a larger issue. The goal he was given, of detecting a single pathway by itself, didn’t actually make sense.

“I was doing that and I realized, ‘What am I doing?’ There’s no clear objective,” said Ghosh.

The problem was quantum interference.

How quantum histories interfere

One of the most famous demonstrations of the mysterious nature of quantum mechanics is called the double-slit experiment. In this demonstration, electrons are shot through a screen with two slits that allow them to pass through to a photographic plate on the other side. With one slit covered, the electrons form a pattern centered on the opening. The photographic plate lights up bright right across from the slit and dims further away from it.

With both slits open, you would expect the pattern to get brighter as more electrons reach the photographic plate. Instead, the effect varies. The two slits do not give rise to two nice bright peaks; instead, you see a rippling pattern in which some areas get brighter while others get dimmer, even though the dimmer areas should, in principle, be easier for electrons to reach.

The effect happens even if the electrons are shot at the screen one by one to stop them from influencing each other directly. It’s as if each electron carries with it two possible histories, one in which it goes through one slit and another where it goes through the other before both end up at the same place. These two histories interfere with each other so that some destinations become less likely instead of more likely.

Results of the double-slit experiment. Credit: Jordgette (CC BY-SA 3.0)

For electrons in the double-slit experiment, the two different histories are two different paths through space. For a measurement at the Large Hadron Collider, the histories are more abstract—paths that lead through transformations of fields. One history might be like the pathway Ghosh was asked to study, in which two W bosons fuse to form a Higgs boson before the Higgs boson splits into two Z bosons. But in another history, the two W bosons might fuse and immediately split into two Z bosons without ever producing a Higgs.

Both histories have the same beginning, with two W bosons, and the same end, with two Z bosons. And just as the two histories of electrons in the double-slit experiment can interfere, so can the two histories for these particles.

Another possible history for colliding particles at the Large Hadron Collider, which interferes with the measurement Ghosh was asked to do. Credit: ATLAS

That interference makes the effect of the Higgs boson much more challenging to spot. ATLAS scientists wanted to look for two pairs of electrons and positrons, which would provide evidence that two Z bosons were produced. They would classify their observations into two types: observations that are evidence for the signal they were looking for (that of a decaying Higgs boson) and observations of events that generate this pattern of particles without the Higgs boson acting as an intermediate (the latter are called the background). But the two types of observations, signal and background, interfere. With a stronger signal, corresponding to more Higgs bosons decaying, you might observe more pairs of electrons and positrons… but if these events interfere, you also might see those pairs disappear.
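The arithmetic behind that counterintuitive behavior is worth spelling out. In quantum mechanics, probabilities come from adding amplitudes and then squaring, which produces a cross term that can be negative. Schematically (this is the textbook form, not a formula from the ATLAS papers):

$$P \;\propto\; \left|A_{\text{signal}} + A_{\text{background}}\right|^{2} = \left|A_{\text{signal}}\right|^{2} + \left|A_{\text{background}}\right|^{2} + 2\,\mathrm{Re}\!\left(A_{\text{signal}}^{*}\,A_{\text{background}}\right)$$

When that last interference term is negative, turning up the signal can remove events rather than add them.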

Learning to infer

In traditional approaches, those disappearances are hard to cope with, even when using methods that already incorporate machine learning.

One of the most common uses of machine learning is classification—for example, distinguishing between pictures of dogs and cats. You train the machine on pictures of cats and pictures of dogs, and it tells you, given a picture, which animal is the most likely match. Physicists at the LHC were already using this kind of classification method to characterize the products of collisions, but it functions much worse when interference is involved.

“If you have something that disappears, you don’t quite know what to train on,” said David Rousseau. “Usually, you’re training signal versus background, exactly like you’re training cats versus dogs. When there is something that disappears, you don’t see what you trained on.”

At first, Ghosh tried a few simple tricks, but as time went on, he realized he needed to make a more fundamental change. He reached out to others in the community and learned about a method called Neural Simulation-Based Inference, or NSBI.

In older approaches, people had trained machine learning models to classify observations into signal and background, using simulations of particle collisions to make the training data. Then they used that classification to infer the most likely value of a number, like the amount of time it takes a Higgs boson to decay, based on data from an actual experiment. Neural Simulation-Based Inference skips the classification and goes directly to the inference.

Instead of trying to classify observations into signal and background, NSBI uses simulations to teach an artificial neural network to guess a formula called a likelihood ratio. Someone using NSBI would run several simulations that describe different situations, such as letting the Higgs boson decay at different rates, and then check how many of each type of simulation yielded a specific observation. The fraction of these simulations with a certain decay rate would provide the likelihood ratio, a method for inferring which decay rate is more likely given experimental evidence. If the neural network is good at guessing this ratio, it will be good at finding how long the Higgs takes to decay.
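The core trick can be sketched in a few lines of code. What follows is a generic simulation-based-inference illustration under simplified assumptions (a one-number observation, a toy simulator, and scikit-learn’s MLPClassifier standing in for ATLAS’s networks), not the collaboration’s actual software: train a network to separate simulations generated under two hypotheses, then convert its output score s into a likelihood ratio via s / (1 - s).

```python
# A toy illustration of neural simulation-based inference (not ATLAS code):
# simulate data under two hypotheses, train a classifier to tell them apart,
# and recover the likelihood ratio from its output score as s / (1 - s).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def simulate(decay_param: float, n: int) -> np.ndarray:
    """Stand-in simulator; a real analysis would use a full detector simulation."""
    return rng.normal(loc=decay_param, scale=1.0, size=(n, 1))

theta0, theta1 = 0.0, 1.0                      # two hypothetical parameter values
x0, x1 = simulate(theta0, 50_000), simulate(theta1, 50_000)
X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300).fit(X, y)

def likelihood_ratio(x) -> np.ndarray:
    """Approximate r(x) = p(x | theta1) / p(x | theta0) from the classifier score."""
    s = clf.predict_proba(np.atleast_2d(x))[:, 1]
    return s / (1.0 - s)

print(likelihood_ratio([[0.5]]))  # ~1 where the two hypotheses are equally likely
```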

Because NSBI doesn’t try to classify observations into different categories, it handles quantum interference more effectively. Instead of trying to find the Higgs based on a signal that disappears, it examines all the data, trying to guess which decay time is the most likely.

Ghosh tested the method, which showed promising results on test data, and presented the results at a conference in 2019. But if he was going to convince the ATLAS collaboration that the method was safe to use, he still had a lot of work ahead of him.

Shifting the weight on ATLAS’ shoulders

Experiments like ATLAS have high expectations attached to them. A collaboration of thousands of scientists, ATLAS needs to not only estimate the laws of physics but also have a clear idea of just how uncertain those estimates are. At the time, NSBI hadn’t been tested in that way.

“None of this has actually been used on data,” said Ghosh. “Nobody knew how to quantify the uncertainties. So you have a neural network that gives you a likelihood. You don’t know how good the likelihood is. Is it well-estimated? What if it’s wrongly estimated just in some weird corner? That would completely bias your results.”

Checking those corners was too big a job for a single PhD student and too complex to complete within a single PhD degree. Ghosh would have to build a team, and he would need time to build that team. That’s tricky in the academic world, where students go on to short-term postdoc jobs with the expectation that they quickly publish new results to improve their CV for the next position.

“We’re usually looking to publish the next paper within two to three years—no time to overhaul our methods,” said Ghosh. Fortunately, Ghosh had support. He completed his PhD under Rousseau and went to work with Daniel Whiteson, who encouraged him to pursue his ambitious project.

“I think it’s really important that postdocs learn to take those risks because that’s what science is,” Whiteson said.

Ghosh gathered his team. Another student of Rousseau’s, Arnaud Maury, worked to calibrate the machine’s confidence in its answers. A professor at the University of Massachusetts, Rafael Coelho Lopes de Sa, joined the project. His student Jay Sandesara would have a key role in getting the calculation to work at full scale on a computer cluster. IJCLab emeritus RD Schaffer and University of Liège professor Gilles Louppe provided cross-checks and advice.

The team wanted a clear demonstration that their method worked, so they took an unusual step. They took data that ATLAS had already analyzed and performed a full analysis using their method instead, showing that it could pass every check the collaboration could think of. They would publish two papers, one describing the method and the other giving the results of their upgraded analysis. Zach Marshall, who was the computing coordinator for ATLAS at the time, helped get the papers through, ensuring that they were vetted by experts in multiple areas.

“It was a very small subset of our community that had that overlap between this technical understanding and the physics analysis experience and understanding that were capable of really speaking to whether that paper was sufficient and intelligible and useful. So we really had to make sure that we engaged that little group of humans by name,” said Marshall.

The new method showed significant improvements, getting a much more precise result than the collaboration’s previous analysis. That improvement, and the thorough checks, persuaded ATLAS to use NSBI more broadly going forward. It will give them much more precision than they expected, using the Higgs boson to search for new particles and clarify our understanding of the quantum world. When ATLAS discusses its future plans, it makes projections of the precision it expects to reach in the future. But those plans are now being upended.

“One of the fun things about this method that Aishik pushed hard is each time it feels like now we do that projection—here’s how well we’ll do in 15 years—we absolutely crush those projections,” said Marshall. “So we are just now having to redo a set of projections because we matched our old projections for 15 years out already today. It’s a very fun problem to have.”



Psyche keeps its date with an asteroid, but now it’s running in backup mode

The spacecraft, built by Maxar Space Systems, will operate its electric thrusters for the equivalent of three months between now and November to keep the mission on track for arrival at asteroid Psyche in 2029.

“Through comprehensive testing and analysis, the team narrowed down the potential causes to a valve that may have malfunctioned in the primary line,” NASA said in a statement Friday. “The switch to the identical backup propellant line in late May restored full functionality to the propulsion system.”

The next waypoint on Psyche’s voyage will be a flyby of Mars in May 2026. Officials expect Psyche to keep that date, which is critical for using Mars’ gravity to slingshot the spacecraft deeper into the Solar System, eventually reaching the asteroid belt about four years from now.

NASA’s Psyche spacecraft takes a spiral path to the asteroid Psyche, as depicted in this graphic that shows the path from above the plane of the planets, labeled with key milestones of the prime mission. Credit: NASA/JPL-Caltech

At Psyche, the spacecraft will enter orbit and progressively move closer to the asteroid, using a suite of sensors to map its surface, measure its shape, mass, and gravity field, and determine its elemental composition. Observations through telescopes suggest Psyche is roughly 140 miles (226 kilometers) in diameter, or about the width of Massachusetts. But it’s likely not spherical in shape. Scientists describe its shape as more akin to a potato.

Potatoes come in lots of shapes, and researchers won’t know exactly what Psyche looks like until NASA’s asteroid explorer arrives in 2029. Psyche will be the first metallic, or M-type, asteroid visited by any spacecraft, and scientists are eager to study an object that’s largely made of metals—probably iron, nickel, and perhaps some rarer elements instead of rocky minerals.

With the Psyche spacecraft’s plasma thrusters back in action, these goals of NASA’s billion-dollar science mission remain achievable.

“The mission team’s dedication and systematic approach to this investigation exemplifies the best of NASA engineering,” said Bob Mase, Psyche project manager at JPL, in a statement. “Their thorough diagnosis and recovery, using the backup system, demonstrates the value of robust spacecraft design and exceptional teamwork.”

But there’s still a lingering concern that whatever problem caused the valve to malfunction in the primary fuel line might also eventually affect the same kind of valve in the backup line.

“We are doing a lot of good proactive work around that possible issue,” wrote Lindy Elkins-Tanton, Psyche’s principal investigator at Arizona State University, in a post on X.



New body size database for marine animals is a “library of life”

The ocean runs on size

McClain officially launched MOBS as a passion project while on sabbatical in 2022, but he had been informally collecting data on body size for various marine groups for several years before that. So he already had a small set of data to kick off the project, incorporating it all into a single large database with a consistent format and style.


Craig McClain holding a giant isopod (Bathynomus giganteus), one of the deep sea’s most iconic crustaceans Credit: Craig McClain

“One of the things that had prevented me from doing this before was the taxonomy issue,” said McClain. “Say you wanted to get the body size for all [species] of octopuses. That was not something that was very well known unless some taxonomist happened to publish [that data]. And that data was likely not up-to-date because new species are [constantly] being described.”

However, in the last five to ten years, the World Register of Marine Species (WoRMS) was established with the objective of cataloging all marine life, with taxonomy experts assigned to specific groups to determine valid new species, which are then added to the data set with a specific numerical code. McClain tied his own dataset to that same code, making it quite easy to update MOBS as new species are added to WoRMS. McClain and his team were also able to gather body size data from various museum collections.

The MOBS database focuses on body length (a linear measurement) as opposed to body mass. “Almost every taxonomic description of a new species has some sort of linear measurement,” said McClain. “For most organisms, it’s a length, maybe a width, and if you’re really lucky you might get a height. It’s very rare for anything to be weighed unless it’s an objective of the study. So that data simply doesn’t exist.”

While all mammals generally have similar density, that’s not the case across other marine groups. “If you compare the density of a sea slug, a nudibranch, versus a jellyfish, even though they have the same masses, their carbon contents are much different,” he said. “And a one-meter worm that’s a cylinder and a one-meter sea urchin that’s a sphere are fundamentally different weights and different kinds of organisms.” One solution for the latter is to convert to volume to account for shape differences. Length-to-weight ratios can also differ substantially for different marine animal groups. That’s why McClain hopes to compile a separate database for length-to-weight conversions.
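The conversion he has in mind is commonly handled with the standard allometric relationship W = aL^b, where the coefficients a and b differ from group to group. A minimal sketch, with made-up coefficients rather than anything from MOBS:

```python
# Minimal sketch of group-specific length-to-weight conversion via W = a * L**b.
# The coefficients below are placeholders for illustration, not MOBS values.

GROUP_COEFFICIENTS = {        # group -> (a, b), hypothetical values
    "bony fish": (0.010, 3.0),
    "jellyfish": (0.002, 2.6),
}

def estimated_weight_g(group: str, length_cm: float) -> float:
    """Estimate wet weight in grams from body length in centimeters."""
    a, b = GROUP_COEFFICIENTS[group]
    return a * length_cm ** b

# Same length, very different estimated mass, which is why a single
# length-to-weight rule can't work across marine groups.
for group in GROUP_COEFFICIENTS:
    print(group, round(estimated_weight_g(group, 50.0), 1), "g")
```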



How a data center company uses stranded renewable energy

“Decisions around where data centers get built have shifted dramatically over the last six months, with access to power now playing the most significant role in location scouting,” Joshi said. “The grid can’t keep pace with AI demands, so the industry is taking control with onsite power generation.”

Soluna, like other data center developers looking to rely on renewable energy, buys the excess power from wind, hydro, and solar plants that they can’t sell to the grid. By the end of the year, Soluna will have three facilities totaling 123 megawatts of capacity in Kentucky and Texas and seven projects in the works with upwards of 800 total megawatts.

Belizaire and I talked about how in Texas, where I report from, there’s plenty of curtailed energy from wind and solar farms because of the region’s limited transmission capacity. In West Texas, other data center developers are also taking advantage of the unused wind energy, far from major load centers like Dallas and Houston, by co-locating their giant warehouses full of advanced computers and high-powered cooling systems with the excess energy.

One data center developer using curtailed renewable power in Texas is IREN. The firm owns and operates facilities optimized for Bitcoin mining and AI. It developed a 750-megawatt facility in Childress and broke ground on a 1.4-gigawatt data center in Sweetwater.

IREN purchases power through the state grid’s wholesale market during periods of oversupply, said Kent Draper, the company’s chief commercial officer, and reduces its consumption when prices are high. It’s able to do that by turning off its computers and minimizing power demand from its data centers.
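What Draper describes is essentially price-responsive load. A simplified, hypothetical sketch of that behavior (the threshold and sample prices are invented for illustration, not IREN’s actual operating policy):

```python
# Hypothetical sketch of price-responsive data center load: run flexible compute
# when wholesale prices signal oversupply, shed load when prices spike.
# The cutoff and sample prices are invented for illustration.

PRICE_CUTOFF_USD_PER_MWH = 40.0  # placeholder, not an actual operating parameter

def target_load_mw(wholesale_price: float, max_load_mw: float) -> float:
    """Return how much load to run at the current wholesale price."""
    return max_load_mw if wholesale_price <= PRICE_CUTOFF_USD_PER_MWH else 0.0

for price in (12.0, 38.0, 180.0):  # e.g., a windy night vs. a scarcity spike
    print(f"${price}/MWh ->", target_load_mw(price, max_load_mw=150.0), "MW")
```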

But curtailment is an issue all over the world, Belizaire said, from Oklahoma, North Dakota, South Dakota, California, and Arizona in the US, to Northern Ireland, Germany, Portugal, and Australia.

“Anywhere where you have large utility-scale renewable development that’s been built out, you’re going to find it,” Belizaire said.

In a March analysis, the US Energy Information Administration reported that solar and wind power curtailments are increasing in California. In 2024, the grid operator for most of California curtailed 3.4 million megawatt hours of utility-scale wind and solar output, a 29 percent increase from the amount of electricity curtailed in 2023.



A shark scientist reflects on Jaws at 50


We’re still afraid to go in the water

Ars chats with marine biologist David Shiffman about the film’s legacy—both good and bad.

Roy Scheider starred as Chief Martin Brody in the 1975 blockbuster Jaws. Credit: Universal Pictures

Today marks the 50th anniversary of Jaws, Steven Spielberg’s blockbuster horror movie based on the bestselling novel by Peter Benchley. We’re marking the occasion with a tribute to this classic film and its enduring impact on the popular perception of sharks, shark conservation efforts, and our culture at large.

(Many spoilers below.)

Jaws tells the story of Chief Martin Brody (Roy Scheider), the new police chief for Amity Island, a New England beach town and prime summer tourist attraction. But that thriving industry is threatened by a series of shark attacks, although the local mayor, Larry Vaughn (Murray Hamilton), initially dismisses the possibility, ridiculing the findings of visiting marine biologist Matt Hooper (Richard Dreyfuss). The attacks keep escalating and the body count grows, until the town hires a grizzled shark hunter named Quint (Robert Shaw) to hunt down and kill the great white shark, with the help of Brody and Hooper.

Benchley wrote his novel after reading about a sports fisherman named Frank Mundus, who captured a very large shark in 1964; in fact, the character of Quint is loosely based on Mundus. Benchley wrote an early draft of the screenplay, which underwent multiple revisions during production. In the end, he estimated that his contributions amounted to the basic storyline and the mechanics. Spielberg wasn’t the studio’s first choice for director; initially they hired Dick Richards, but Richards kept referring to the shark as a whale. Eventually, he was fired and replaced with the 26-year-old Spielberg, who had just finished his first feature film (The Sugarland Express).

Spielberg was given a $3.5 million shooting budget and a timeframe of 55 days for filming. However, the production was troubled from the start, largely due to the director’s insistence on shooting on location in Martha’s Vineyard; Jaws was the first major film to be shot on the ocean. Spielberg later admitted, “I was pretty naive about Mother Nature and the hubris of a filmmaker who thinks he can conquer the elements was foolhardy.” Unwanted boats kept drifting into the frame; cameras kept getting waterlogged; Carl Gottlieb (who played the local news editor Meadows) was nearly decapitated by a propeller; Dreyfuss nearly got stuck in the shark cage; and several actors suffered from seasickness. Frustrated crew members took to calling the movie “Flaws.”

A shark strikes

“duh-duh-duh-duh-duh-duh….” Universal Pictures

There were three pneumatically powered full-sized mechanical sharks built for the shoot, nicknamed “Bruce,” and they kept malfunctioning. The pneumatic hoses kept taking on seawater; the skin was made of neoprene foam, which soaked up water and became bloated; and one of the models kept getting tangled up in seaweed. In the end, Spielberg opted to shoot most of the early scenes without ever showing the actual shark, which actually heightened the tension and suspense, especially when combined with John Williams’ ominous theme music (“duh-duh-duh-duh-duh-duh…”).

In the end, shooting ran for 159 days, and the budget ballooned to $9 million. All the delays gave Spielberg and his writers (especially Gottlieb) extra time to refine the script, often just prior to filming the scenes. A lot of the dialogue was improvised by the actors. And it was all worth it in the end, because Jaws went on to become a major summer box office success. All told, it grossed $476 million globally across all its theatrical releases and won three Oscars, although it lost Best Picture to One Flew Over the Cuckoo’s Nest.

Jaws inspired many, many subsequent films, including Ridley Scott’s Alien in 1979, described in pitch meetings as “Jaws in space.” Audience reactions were often extreme, with many people becoming fearful of swimming in the ocean for fear of sharks. And while the sequels were, shall we say, underwhelming, the original Jaws has stood the test of time. Ars spoke with marine biologist and shark conservationist David Shiffman, author of Why Sharks Matter, to discuss the film’s depiction of sharks and its enduring place in popular culture.

Ars Technica: Let’s start by talking about the enormous impact of the film, both good and bad, on the general public’s awareness of sharks.

David Shiffman: A lot of folks in both the marine science world and the ocean conservation communities have reported that Jaws in a lot of ways changed our world. It’s not that people used to think that sharks were cute, cuddly, adorable animals, and then after Jaws, they thought that they were bloodthirsty killing machines. They just weren’t on people’s minds. Fishermen knew about them, surfers thought about them, but that was about it. Most people who went to the beach didn’t pay much mind to what could be there. Jaws absolutely shattered that. My parents both reported that the summer that Jaws came out, they were afraid to go swimming in their community swimming pools.

No, really, the water’s fine!

“You knew.” The young boy’s mother (Lee Fierro) confronts Brody. Universal Pictures

David Shiffman: I have encountered people who were so scared that they were afraid to go in the bathtub. A lot of movies are very scary, but they don’t have that real-world impact. I love Jurassic Park, but I’m not afraid that a T. rex is going to eat me when I go into an outhouse, even though that’s about as realistic as what’s portrayed in Jaws. There’s something called the “Jaws Effect” in public policy literature, which is a way of measuring how fictional portrayals of real-world issues affect what citizens think about that issue and what policy preferences they support as a result. It’s fascinating how a fictional portrayal can do that, because I cannot stress enough: That is not what sharks look like or how they behave.

The movie also was the first time that a scientist was the hero. People half a generation above me have reported that seeing Richard Dreyfuss’ Hooper on the big screen as the one who saves the day changed their career trajectory. “You can be a scientist who studies fish. Cool. I want to do that.” In the time since Jaws came out, a lot of major changes have happened. One is that shark populations have declined globally by about 50 percent, and many species are now critically endangered.

And shark science has become much more professionalized. The American Elasmobranch Society—I’m on the board of directors—was founded in 1983, and now we have about 500 members in the US, Canada, and Mexico. There have since been subsequent organizations founded in Australia and the Pacific Islands, Europe, South America, and a new one starting this year in Asia.

And then, from a cultural standpoint, we now have a whole genre of bad shark movies.

Ars Technica: Sharknado!

David Shiffman: Yes! Sharknado is one of the better of the bunch. Sitting on my desk here, we’ve got Sharkenstein, Raiders of the Lost Shark, and, of course, Shark Exorcist, all from the 2010s. I’ve been quoted as saying there’s two types of shark movie: There’s Jaws and there’s bad shark movies.

Ars Technica: Populations of the tiger shark, the great white, and a couple of other species have declined so dramatically that many are on the verge of extinction. Is it just a coincidence that those declines started shortly after Jaws came out?

David Shiffman: The short answer is not that Jaws caused this, but that perhaps Jaws made it easier for it to happen because people weren’t outraged the way they might’ve been if it happened to say, whales, whose populations were also declining around the same time. The number one threat to shark species as a whole is unsustainable overfishing practices. People are killing too many sharks. Sustainable fisheries for sharks can and do exist, and the US largely has done a good job with this, but around the world, it’s a bad scene.

“A whole genre of bad shark movies”

For instance, shark fin soup started to be a problem around the 1980s thanks to the economic boom in China and the emergence of a new middle class there. Shark fin soup is a traditional Chinese and Southeast Asian delicacy. It’s associated with the emperor and his court. It’s not shark meat that’s used. It’s the little skeletal fin rays from the fins that are basically a bland, noodle-like substance when they’re dried and boiled. The purpose of this was for people to say, “I have so much money that I can eat these incredibly rare delicacies.” That was not caused by Jaws. But perhaps it was allowed to happen because there was less public sympathy for sharks.

It’s worth noting that shark fin soup and the shark fin trade is no longer the biggest or only threat to sharks. It hasn’t been in about 20 years. Ironically, a lot of that has to do with Chinese government efforts not to save the ocean, but to crack down on public corruption. A lot of government officials used to throw extravagant banquets for their friends and family. The new Chinese government said, “We’re not doing that anymore.” That alone saved a lot of endangered species. It was not motivated by concern about the state of the ocean, but it had that effect.

Ars Technica: People have a tendency to think that sharks are simply brutal killing machines. Why are they so important to the ecosystem?

David Shiffman: The title of my book is Why Sharks Matter because sharks do matter and people don’t think about them that way. These are food chains that provide billions of humans with food, including some of the poorest humans on Earth. They provide tens of millions of humans with jobs. When those food chains are disrupted, that’s bad for coastal communities, bad for food security and livelihoods. If we want to have healthy ocean food chains, we need a healthy top of the food chain, because when you lose the top of the food chain, the whole thing can unravel in unpredictable, but often quite devastating ways.

 So sharks play important ecological roles by holding the food chain that we all depend on in place. They’re also not a significant threat to you and your family. More people in a typical year die from flower pots falling on their head when they walk down the street. More people in a typical year die falling off a cliff when they’re trying to take a selfie of the scenery behind them, than are killed by sharks. Any human death or injury is a tragedy, and I don’t want to minimize that. But when we’re talking about global-scale policy responses, the relative risk versus reward needs to be considered.

Ars Technica:  There’s a scene in Jaws where Hooper is talking about his personal theory: territoriality, the idea that this rogue great white came in and made this his personal territory and now he’ll just keep feeding until the food runs out. Is that a real scientific premise from the 1970s and how valid is it?

The hunt begins

The town hires grizzled shark hunter Quint (Robert Shaw) to kill the great white shark. Universal Pictures

David Shiffman: Rogue sharks are nonsense. It is nonsense that is still held by some kooks who are ostensibly in my field, but it is not supported by any evidence whatsoever. In all of recorded human history, there is proof that exactly one shark bit more than one human. That was the Sharm el-Sheikh attacks around Christmas in Egypt a few years ago. Generally speaking, a lot of times it’s hard to predict why wild animals do or don’t do anything. But if this was a behavior that was real, there would be evidence that it happens and there isn’t any, despite a lot of people looking.

Was it commonly believed in the 1970s? No. Did Peter Benchley make it up? No. It’s a thing in some animals for sure. In some neighborhoods, people will pick up gators and move them hundreds of miles away; the gators will move back to that exact same spot. I think the same thing has been shown with bears. Wolves certainly have a home range. But for sharks, it’s not a thing.

Ars Technica: Quint has a famous monologue about surviving the USS Indianapolis sinking and witnessing crew members being eaten by sharks. How historically accurate is that?

David Shiffman: We don’t really know how many of the people who were killed following the sinking of the Indianapolis were killed by sharks. Certainly, firsthand accounts report that sharks were present. But those people were in the water because they were on a boat that exploded after being hit by a torpedo. That is not good for your health. So a lot of those people were either mortally wounded or killed by that initial explosion, and then perhaps were scavenged by sharks. Those are also people who are in the water bleeding, making a lot of noise. That’s an incredible scene in the movie. But Quint attributes more deaths to sharks than have been reliably documented in the history of the world, ever.

Ars Technica: How accurate is Jaws in terms of how and why sharks attack humans? For instance, someone says that people splashing in the water mimics what sharks want to hunt. 

David Shiffman: Anyone who tells you they know exactly why a wild animal does or does not do something is someone who you should be a little skeptical of. But a leading theory, which I think makes sense, is this idea of mistaken identity. Some of the people who are most commonly bitten by sharks, though it’s still astronomically rare, are surfers. These are people who are cutting through the water with a silhouette that resembles a seal, wearing black neoprene, which is modeled after seal blubber. Sharks have been patrolling the ocean since before there were trees on land, and it’s only in the last hundred years or so that they’ve had to wonder, is that my preferred prey, or is it a human using technology to mimic my preferred prey for recreational purposes?

If you’ve been in the ocean, there’s been a shark not that far from you, and it knew you were there, and you probably had no idea it was there and had a pleasant day in the water. The sharks that do bite people, they take a little bite and they go, what is that? And swim away. That can be real bad if it hits a major artery or if you’re far from shore. Again, I don’t want to minimize the real harm. But it is not a shark hunting you because it has a taste for human flesh. They don’t have hands. They explore their environment with their mouths and most things in their environment they can eat.

I think Mythbusters tested fish blood versus mammal blood versus chicken blood. And the sharks were attracted to fish blood and had no reaction to the others. So these are animals that are very, very, very well adapted for environmental conditions that in some cases don’t really exist anymore.

Man vs. great white

Brody fights off an increasingly aggressive great white. Universal Pictures

With humans, most of the time, what happens is an immediate bite, and then they swim away. With seals or large prey, they’ll often hit it really hard from below, sometimes knocking it completely out of the water. Or if they’re hunting whales or something that they can’t fit in their mouth, they just take a huge bite and swim away. With fish, they swallow them whole to the extent possible. Sometimes there’s a shaking motion to snap a neck or whatever. You see that with some land predators, too. It’s nothing like what’s seen there—but what an awesome scene.

Ars Technica: What is your favorite scene in Jaws and the one that makes you cringe the most?

David Shiffman: Oh, man. It’s really a great movie, and it holds up well. It was hailed as revolutionary at the time because you hardly ever see the shark. But the reason they did that was because the model of the shark that they built kept breaking. So they decided, let’s just shoot it from the shark’s eye view and save money and annoyance. I love the scene when Hooper realizes that the tiger shark that they’ve caught is obviously not the right species and the reaction that people have to that—just this idea that science and expertise can be used to solve problems. Whenever a shark bites someone, there are people who go out and kill any shark they can find and think that they’re helping.

One of my favorite professional experiences is the American Elasmobranch Society conference. One year it was in Austin, Texas, near the original Alamo Drafthouse. Coincidentally, while we were there, the cinema held a “Jaws on the Water” event. They had a giant projector screen, and we were sitting in a lake in inner tubes while there were scuba divers in the water messing with us from below. I did that with 75 professional shark scientists. It was absolutely amazing. It helped knowing that it was a lake.

Ars Technica: If you wanted to make another really good shark movie, what would that look like today? 

David Shiffman: I often say that there are now three main movie plots: a man goes on a quest, a stranger comes to town, or there’s a shark somewhere you would not expect a shark to be. It depends if you want to make a movie that’s actually good, or one of the more fun “bad” movies like Sharknado or Sharktopus or Avalanche Sharks—the tagline of which is “snow is just frozen water.” These movies are just off the rails and absolutely incredible. The ones that don’t take themselves too seriously and are in on the joke tend to be very fun. But then you get movies like Netflix’s Under Paris (2024); they absolutely thought they were making a good movie and took themselves very seriously, and it was painful to watch.

I would love to see actual science and conservation portrayed. I’d love to see species that are not typically found in these movies featured. The Sharknado series actually did a great job of this because they talked with me and other scientists after the success of the first one. Sharknado II is thanked in my PhD dissertation, because they funded one of my chapters. In that movie, it’s not just great whites and tiger sharks and bull sharks. They have a whale shark that falls out of the sky and hits someone. They have a cookie-cutter shark that falls out of the sky and burrows through someone’s leg. There’s a lot of shark diversity out there, and it’d be nice to get that featured more.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

A shark scientist reflects on Jaws at 50 Read More »

microsoft-lays-out-its-path-to-useful-quantum-computing

Microsoft lays out its path to useful quantum computing


Its platform needs error correction that works with different hardware.

Some of the optical hardware needed to make Atom Computing’s machines work. Credit: Atom Computing

On Thursday, Microsoft’s Azure Quantum group announced that it has settled on a plan for getting error correction on quantum computers. While the company pursues its own hardware efforts, the Azure team is a platform provider that currently gives access to several distinct types of hardware qubits. So it has chosen a scheme that is suitable for several different quantum computing technologies (notably excluding its own). The company estimates that the system it has settled on can take hardware qubits with an error rate of about 1 in 1,000 and use them to build logical qubits where errors are instead 1 in 1 million.

While it’s describing the scheme in terms of mathematical proofs and simulations, it hasn’t shown that it works using actual hardware yet. But one of its partners, Atom Computing, is accompanying the announcement with a description of how its machine is capable of performing all the operations that will be needed.

Arbitrary connections

There are similarities and differences between what the company is talking about today and IBM’s recent update of its roadmap, which described another path to error-resistant quantum computing. In IBM’s case, it makes both the software stack that will perform the error correction and the hardware needed to implement it. It uses chip-based hardware, with the connections among qubits mediated by wiring that’s laid out when the chip is fabricated. Since error correction schemes require a very specific layout of connections among qubits, once IBM decides on a quantum error correction scheme, it can design chips with the wiring needed to implement that scheme.

Microsoft’s Azure, in contrast, provides its users with access to hardware from several different quantum computing companies, each based on different technology. Some of them, like Rigetti and Microsoft’s own planned processor, are similar to IBM’s in that they have a fixed layout during manufacturing, and so can only handle codes that are compatible with their wiring layout. But others, such as those provided by Quantinuum and Atom Computing, store their qubits in atoms that can be moved around and connected in arbitrary ways. Those arbitrary connections allow very different types of error correction schemes to be considered.

It can be helpful to think of this using an analogy to geometry. A chip is like a plane, where it’s easiest to form the connections needed for error correction among neighboring qubits; longer connections are possible, but not as easy. Things like trapped ions and atoms provide a higher-dimensional system where far more complicated patterns of connections are possible. (Again, this is an analogy. IBM is using three-dimensional wiring in its processing chips, while Atom Computing stores all its atoms in a single plane.)

Microsoft’s announcement is focused on the sorts of processors that can form the more complicated, arbitrary connections. And, well, it’s taking full advantage of that, building an error correction system with connections that form a four-dimensional hypercube. “We really have focused on the four-dimensional codes due to their amenability to current and near term hardware designs,” Microsoft’s Krysta Svore told Ars.

The code not only describes the layout of the qubits and their connections, but also the purpose of each hardware qubit. Some of them are used to hang on to the value of the logical qubit(s) stored in a single block of code. Others are used for what are called “weak measurements.” These measurements tell us something about the state of the ones that are holding on to the data—not enough to know their values (a measurement that would end the entanglement), but enough to tell if something has changed. The details of the measurement allow corrections to be made that restore the original value.

Microsoft’s error correction system is described in a preprint that the company recently released. It includes a family of related geometries, each of which provides different degrees of error correction, based on how many simultaneous errors they can identify and fix. The descriptions are about what you’d expect for complicated math and geometry—“Given a lattice Λ with an HNF L, the code subspace of the 4D geometric code C_Λ is spanned by the second homology H_2(T^4_Λ, F_2) of the 4-torus T^4_Λ”—but the gist is that all of them convert collections of physical qubits into six logical qubits that can be error corrected.

The more hardware qubits you add to host those six logical qubits, the greater error protection each of them gets. That becomes important because some more sophisticated algorithms will need more than the one-in-a-million error protection that Svore said Microsoft’s favored version will provide. That favorite is what’s called the Hadamard version, which bundles 96 hardware qubits to form six logical qubits, and has a distance of eight (distance being a measure of how many simultaneous errors it can tolerate). You can compare that with IBM’s announcement, which used 144 hardware qubits to host 12 logical qubits at a distance of 12 (so, more hardware, but more logical qubits and greater error resistance).
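For a rough sense of what those numbers imply, the back-of-the-envelope Python sketch below tallies the hardware overhead of each configuration and applies a generic error-suppression rule of thumb from the error correction literature. The threshold value and the scaling formula are assumptions chosen for illustration, not figures from Microsoft’s or IBM’s own analyses.

```python
# Rough comparison of the two configurations described in the article.
# Qubit counts and distances come from the article; the threshold value and
# the scaling rule are generic assumptions, not the companies' own models.
codes = {
    "Microsoft Hadamard 4D": {"physical": 96, "logical": 6, "distance": 8},
    "IBM (recent roadmap)": {"physical": 144, "logical": 12, "distance": 12},
}

p_physical = 1e-3   # roughly the 1-in-1,000 hardware error rate cited above
p_threshold = 1e-2  # assumed code threshold, purely for illustration

for name, c in codes.items():
    overhead = c["physical"] / c["logical"]  # hardware qubits per logical qubit
    # Common heuristic: logical error scales as (p / p_th) ** ((d + 1) / 2).
    p_logical = (p_physical / p_threshold) ** ((c["distance"] + 1) / 2)
    print(f"{name}: {overhead:.0f} physical qubits per logical qubit, "
          f"heuristic logical error rate ~{p_logical:.0e}")
```

Under these assumptions, both configurations land in the same general neighborhood; the actual logical error rates depend on the specific code, decoder, and hardware noise profile, which is exactly what the simulations in the preprint are meant to pin down.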

The other good stuff

On its own, a description of the geometry is not especially exciting. But Microsoft argues that this family of error correction codes has a couple of significant advantages. “All of these codes in this family are what we call single shot,” Svore said. “And that means that, with a very low constant number of rounds of getting information about the noise, one can decode and correct the errors. This is not true of all codes.”

Limiting the number of measurements needed to detect errors is important. For starters, measurements themselves can create errors, so making fewer makes the system more robust. In addition, in things like neutral atom computers, the atoms have to be moved to specific locations where measurements take place, and the measurements heat them up so that they can’t be reused until cooled. So, limiting the measurements needed can be very important for the performance of the hardware.

The second advantage of this scheme, as described in the draft paper, is the fact that you can perform all the operations needed for quantum computing on the logical qubits these schemes host. Just like in regular computers, all the complicated calculations performed on a quantum computer are built up from a small number of simple logical operations. But not every possible logical operation works well with any given error correction scheme. So it can be non-trivial to show that an error correction scheme is compatible with enough of the small operations to enable universal quantum computation.

So, the paper describes how some logical operations can be performed relatively easily, while a few others require manipulations of the error correction scheme in order to work. (These manipulations have names like lattice surgery and magic state distillation, which are good signs that the field doesn’t take itself that seriously.)

So, in sum, Microsoft feels that it has identified an error correction scheme that is fairly compact, can be implemented efficiently on hardware that stores qubits in photons, atoms, or trapped ions, and enables universal computation. What it hasn’t done, however, is show that it actually works. And that’s because it simply doesn’t have the hardware right now. Azure is offering trapped ion machines from IonQ and Quantinuum, but these top out at 56 qubits—well below the 96 needed for their favored version of these 4D codes. The largest it has access to is a 100-qubit machine from a company called PASQAL, which barely fits the 96 qubits needed, leaving no room for error.

While it should be possible to test smaller versions of codes in the same family, the Azure team has already demonstrated its ability to work with error correction codes based on hypercubes, so it’s unclear whether there’s anything to gain from that approach.

More atoms

Instead, it appears to be waiting for another partner, Atom Computing, to field its next-generation machine, one it’s designing in partnership with Microsoft. “This first generation that we are building together between Atom Computing and Microsoft will include state-of-the-art quantum capabilities, will have 1,200 physical qubits,” Svore said. “And then the next upgrade of that machine will have upwards of 10,000. And so you’re looking at then being able to go to upwards of a hundred logical qubits with deeper and more reliable computation available.”

So, today’s announcement was accompanied by an update on progress from Atom Computing, focusing on a process called “midcircuit measurement.” Normally, during quantum computing algorithms, you have to resist performing any measurements of the value of qubits until the entire calculation is complete. That’s because quantum calculations depend on things like entanglement and each qubit being in a superposition between its two values; measurements can cause all that to collapse, producing definitive values and ending entanglement.

Quantum error correction schemes, however, require that some of the hardware qubits undergo weak measurements multiple times while the computation is in progress. Those are quantum measurements taking place in the middle of a computation—midcircuit measurements, in other words. To show that its hardware will be up to the task that Microsoft expects of it, the company decided to demonstrate mid-circuit measurements on qubits implementing a simple error correction code.

The process reveals a couple of notable features that are specific to doing this with neutral atoms. To begin with, the atoms being used for error correction have to be moved to a location—the measurement zone—where they can be measured without disturbing anything else. Then, the measurement typically heats the atoms up slightly, meaning they have to be cooled back down afterward. Neither of these processes is perfect, and so sometimes an atom gets lost and needs to be replaced with one from a reservoir of spares. Finally, the atom’s value needs to be reset, and it has to be sent back to its place in the logical qubit.

Testing revealed that about 1 percent of the atoms get lost each cycle, but the system successfully replaces them. In fact, they set up a system where the entire collection of atoms is imaged during the measurement cycle, and any atom that goes missing is identified by an automated system and replaced.
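In outline, the cycle described above amounts to a simple loop. The Python sketch below is purely illustrative: the function names are hypothetical stand-ins rather than anything from Atom Computing’s control software, and only the roughly 1 percent loss rate comes from the company’s description.

```python
import random

ATOM_LOSS_RATE = 0.01  # roughly 1 percent of atoms lost per cycle, per the article

# Hypothetical stand-ins for hardware steps; named for readability only.
def move_to_measurement_zone(atom): pass
def weak_measure(atom): return random.choice([0, 1])  # a syndrome bit (randomized here)
def cool(atom): pass
def reset(atom): pass
def return_to_logical_block(atom): pass

def measurement_cycle(ancilla_atoms, reservoir):
    """One illustrative midcircuit measurement cycle over the ancilla atoms."""
    syndromes = {}
    for i, atom in enumerate(list(ancilla_atoms)):
        move_to_measurement_zone(atom)        # data atoms stay undisturbed
        syndromes[atom] = weak_measure(atom)  # learn about errors, not stored values
        if random.random() < ATOM_LOSS_RATE and reservoir:
            ancilla_atoms[i] = reservoir.pop()  # imaging shows a loss; pull in a spare
        else:
            cool(atom)                        # measurement heats the atom slightly
            reset(atom)
            return_to_logical_block(atom)
    return syndromes  # handed to a decoder, which picks the corrections to apply

# Example: four ancilla atoms and a couple of spares.
print(measurement_cycle(["a0", "a1", "a2", "a3"], ["spare0", "spare1"]))
```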

Overall, without all these systems in place, the fidelity of a qubit is about 98 percent in this hardware. With error correction turned on, even this simple logical qubit saw its fidelity rise to over 99.5 percent. All of which suggests their next computer should be up to some significant tests of Microsoft’s error correction scheme.

Waiting for the lasers

The key questions are when it will be released, and when its successor, which should be capable of performing some real calculations, will follow it. Those are challenging questions to answer because, more so than some other quantum computing technologies, neutral atom computing is dependent on something that’s not made by the people who build the computers: lasers. Everything about this system—holding atoms in place, moving them around, measuring, performing manipulations—is done with a laser. The lower the noise of the laser (in terms of things like frequency drift and energy fluctuations), the better the performance will be.

So, while Atom can explain its needs to its suppliers and work with them to get things done, it has less control over its fate than some other companies in this space.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

Microsoft lays out its path to useful quantum computing Read More »

mit-student-prints-ai-polymer-masks-to-restore-paintings-in-hours

MIT student prints AI polymer masks to restore paintings in hours

MIT graduate student Alex Kachkine once spent nine months meticulously restoring a damaged baroque Italian painting, which left him plenty of time to wonder if technology could speed things up. Last week, MIT News announced his solution: a technique that uses AI-generated polymer films to physically restore damaged paintings in hours rather than months. The research appears in Nature.

Kachkine’s method works by printing a transparent “mask” containing thousands of precisely color-matched regions that conservators can apply directly to an original artwork. Unlike traditional restoration, which permanently alters the painting, these masks can reportedly be removed whenever needed, making the process reversible.

“Because there’s a digital record of what mask was used, in 100 years, the next time someone is working with this, they’ll have an extremely clear understanding of what was done to the painting,” Kachkine told MIT News. “And that’s never really been possible in conservation before.”


Figure 1 from the paper. Credit: MIT

Nature reports that up to 70 percent of institutional art collections remain hidden from public view due to damage—a large amount of cultural heritage sitting unseen in storage. Traditional restoration methods, where conservators painstakingly fill damaged areas one at a time while mixing exact color matches for each region, can take weeks to decades for a single painting. It’s skilled work that requires both artistic talent and deep technical knowledge, but there simply aren’t enough conservators to tackle the backlog.

The mechanical engineering student conceived the idea during a 2021 cross-country drive to MIT, when gallery visits revealed how much art remains hidden due to damage and restoration backlogs. As someone who restores paintings as a hobby, he understood both the problem and the potential for a technological solution.

To demonstrate his method, Kachkine chose a challenging test case: a 15th-century oil painting requiring repairs in 5,612 separate regions. An AI model identified damage patterns and generated 57,314 different colors to match the original work. The entire restoration process reportedly took 3.5 hours—about 66 times faster than traditional hand-painting methods.


Alex Kachkine, who developed the AI-printed film technique. Credit: MIT

Notably, Kachkine avoided using generative AI models like Stable Diffusion or the “full-area application” of generative adversarial networks (GANs) for the digital restoration step. According to the Nature paper, these models cause “spatial distortion” that would prevent proper alignment between the restored image and the damaged original.

MIT student prints AI polymer masks to restore paintings in hours Read More »

spacex’s-next-starship-just-blew-up-on-its-test-stand-in-south-texas

SpaceX’s next Starship just blew up on its test stand in South Texas


SpaceX had high hopes for Starship in 2025, but it’s been one setback after another.

A fireball erupts around SpaceX’s Starship rocket in South Texas late Wednesday night. Credit: LabPadre

SpaceX’s next Starship rocket exploded during a ground test in South Texas late Wednesday, dealing another blow to a program already struggling to overcome three consecutive failures in recent months.

The late-night explosion at SpaceX’s rocket development complex in Starbase, Texas, destroyed the bullet-shaped upper stage that was slated to launch on the next Starship test flight. The powerful blast set off fires around SpaceX’s Massey’s Test Site, located a few miles from the company’s Starship factory and launch pads.

Live streaming video from NASASpaceflight.com and LabPadre—media organizations with cameras positioned around Starbase—showed the 15-story-tall rocket burst into flames shortly after 11:00 pm local time (12:00 am EDT; 04:00 UTC). Local residents as far as 30 miles away reported seeing and feeling the blast.

SpaceX confirmed the Starship, numbered Ship 36 in the company’s inventory, “experienced a major anomaly” on a test stand as the vehicle prepared to ignite its six Raptor engines for a static fire test. These hold-down test-firings are typically one of the final milestones in a Starship launch campaign before SpaceX moves the rocket to the launch pad.

The explosion occurred as SpaceX finished up loading super-cold methane and liquid oxygen propellants into Starship in preparation for the static fire test. The company said the area around the test site was evacuated of all personnel, and everyone was safe and accounted for after the incident. Firefighters from the Brownsville Fire Department were dispatched to the scene.

“Our Starbase team is actively working to safe the test site and the immediate surrounding area in conjunction with local officials,” SpaceX posted on X. “There are no hazards to residents in surrounding communities, and we ask that individuals do not attempt to approach the area while safing operations continue.”

Picking up the pieces

Earlier Wednesday, just hours before the late-night explosion at Starbase, an advisory released by the Federal Aviation Administration showed SpaceX had set June 29 as a tentative launch date for the next Starship test flight. That won’t happen now, and it’s anyone’s guess when SpaceX will have another Starship ready to fly.

Massey’s Test Site, named for a gun range that once occupied the property, is situated on a bend in the Rio Grande River, just a few hundred feet from the Mexican border. The test site is currently the only place where SpaceX can put Starships through proof testing and static fire tests before declaring the rockets are ready to fly.

The extent of the damage to ground equipment at Massey’s was not immediately clear, so it’s too soon to say how long the test site will be out of commission. For now, though, the explosion leaves SpaceX without a facility to support preflight testing on Starships.

The videos embedded below come from NASASpaceflight.com and LabPadre, showing multiple angles of the Starship blast.

The explosion at Massey’s is a reminder of SpaceX’s rocky path to get Starship to this point in its development. In 2020 and 2021, SpaceX lost several Starship prototypes to problems during ground and flight testing. The visual of Ship 36 going up in flames harkens back to those previous explosions, along with the fiery demise of a Falcon 9 rocket on its launch pad in 2016 under circumstances similar to Wednesday night’s incident.

SpaceX has now launched nine full-scale Starship rockets since April 2023, and before the explosion, the company hoped to launch the 10th test flight later this month. Starship’s track record has been dreadful so far this year, with the rocket’s three most recent test flights ending prematurely. These setbacks followed a triumphant 2024, when SpaceX made clear progress on each successive Starship suborbital test flight, culminating in the first catch of the rocket’s massive Super Heavy booster with giant robotic arms on the launch pad tower.

Stacked together, the Super Heavy booster stage and Starship upper stage stand more than 400 feet tall, creating the largest rocket ever built. SpaceX has already flown a reused Super Heavy booster, and the company has designed Starship itself to be recoverable and reusable, too.

After last year’s accomplishments, SpaceX appeared to be on track for a full orbital flight, an attempt to catch and recover Starship itself, and an important in-space refueling demonstration in 2025. The refueling demo has officially slipped into 2026, and it’s questionable whether SpaceX will make enough progress in the coming months to attempt recovery of a ship before the end of this year.

A Super Heavy booster and Starship upper stage are seen in March at SpaceX’s launch pad in South Texas, before the ship was stacked atop the booster for flight. The Super Heavy booster for the next Starship flight completed its static fire test earlier this month. Credit: Brandon Bell/Getty Images

Ambition meets reality

SpaceX debuted an upgraded Starship design, called Version 2 or Block 2, on a test flight in January. It’s been one setback after another since then.

The new Starship design is slightly taller than the version of Starship that SpaceX flew in 2023 and 2024. It has an improved heat shield to better withstand the extreme heat of atmospheric reentry. SpaceX also installed a new fuel feed line system to route methane fuel to the ship’s Raptor engines, and an improved propulsion avionics module controlling the vehicle’s valves and reading sensors.

Despite—or perhaps because of—all of these changes for Starship Version 2, SpaceX has been unable to replicate the successes it achieved with Starship in the last two years. Ships launched on test flights in January and March spun out of control minutes after liftoff, scattering debris over the sea, and in at least one case, onto a car in the Turks and Caicos Islands.

SpaceX engineers concluded the January failure was likely caused by intense vibrations that triggered fuel leaks and fires in the ship’s engine compartment, causing an early shutdown of the rocket’s engines. Engineers said the vibrations were likely in resonance with the vehicle’s natural frequency, intensifying the shaking beyond the levels SpaceX predicted.

The March flight failed in similar fashion, but SpaceX’s investigators determined the most probable root cause was a hardware failure in one of the ship’s engines, a different failure mode than two months before.

During SpaceX’s most recent Starship test flight last month, the rocket completed the ascent phase of the mission as planned, seemingly overcoming the problems that plagued the prior two launches. But soon after the Raptor engines shut down, a fuel leak caused the ship to begin tumbling in space, preventing the vehicle from completing a guided reentry to test the performance of new heat shield materials.

File photo of a Starship static fire in May at Massey’s Test Site.

SpaceX is working on a third-generation Starship design, called Version 3, that the company says could be ready to fly by the end of this year. The upgraded Starship Version 3 design will be able to lift heavier cargo—up to 200 metric tons—into orbit thanks to larger propellant tanks and more powerful Raptor engines. Version 3 will also have the ability to refuel in low-Earth orbit.

Version 3 will presumably have permanent fixes to the problems currently slowing SpaceX’s pace of Starship development. And there are myriad issues for SpaceX’s engineers to solve, from engine reliability and the ship’s resonant frequency, to beefing up the ship’s heat shield and fixing its balky payload bay door.

Once officials solve these problems, it will be time for SpaceX to bring a Starship from low-Earth orbit back to the ground. Then, there’s more cool stuff on the books, like orbital refueling and missions to the Moon in partnership with NASA’s Artemis program. NASA has contracts worth more than $4 billion with SpaceX to develop a human-rated Starship that can land astronauts on the Moon and launch them safely back into space.

The Trump administration’s proposed budget for NASA would cancel the Artemis program’s ultra-expensive Space Launch System rocket and Orion crew capsule after two more flights, leaving commercial heavy-lifters to take over launching astronauts from the Earth to the Moon. SpaceX’s Starship, already on contract with NASA as a human-rated lander, may eventually win more government contracts to fill the role of SLS and Orion under Trump’s proposed budget. Other rockets, such as Blue Origin’s New Glenn, are also well-positioned to play a larger role in human space exploration.

NASA’s official schedule for the first Artemis crew landing on the Moon puts the mission some time in 2027, using SLS and Orion to transport astronauts out to the vicinity of the Moon to meet up with SpaceX’s Starship lunar lander. After that mission, known as Artemis III, NASA would pivot to using commercial rockets from Elon Musk’s SpaceX and Jeff Bezos’ Blue Origin to replace the Space Launch System.

Meanwhile, SpaceX’s founder and CEO has his sights set on Mars. Last month, Musk told his employees he wants to launch the first Starships toward the Red Planet in late 2026, when the positions of Earth and Mars in the Solar System make a direct journey possible. Optimistically, he would like to send people to Mars on Starships beginning in 2028.

All of these missions are predicated on SpaceX mastering routine Starship launch operations, rapid reuse of the ship and booster, and cryogenic refueling in orbit, along with adapting systems such as life support, communications, and deep space navigation for an interplanetary journey.

The to-do list is long for SpaceX’s Starship program—too long for Mars landings to seem realistic any time in the next few years. NASA’s schedule for the Artemis III lunar landing mission in 2027 is also tight, and not only because of Starship’s delays. The development of new spacesuits for astronauts to wear on the Moon may also put the Artemis III schedule at risk. NASA’s SLS rocket and Orion spacecraft have had significant delays throughout their history, so it’s not a sure thing they will be ready in 2027.

While it’s too soon to know the precise impact of Wednesday night’s explosion, we can say with some confidence that the chances of Starship meeting these audacious schedules are lower today than they were yesterday.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

SpaceX’s next Starship just blew up on its test stand in South Texas Read More »

new-dating-for-white-sands-footprints-confirms-controversial-theory

New dating for White Sands footprints confirms controversial theory

Some of the sediment layers contained the remains of ancient grass seeds mixed with the sediment. Bennett and his colleagues radiocarbon-dated seeds from the layer just below the oldest footprints and the layer just above the most recent ones. According to those 2021 results, the oldest footprints were made sometime after 23,000 years ago; the most recent ones were made sometime before 21,000 years ago.

At that time, the northern half of the continent was buried under ice sheets several kilometers thick. The existence of 23,000-year-old footprints could only mean that people were already living in what’s now New Mexico before the ice sheets sealed off the southern half of the continent from the rest of the world for the next few thousand years.


Ancient human footprints found in situ at White Sands National Park in New Mexico. Credit: Jeffrey S. Pigati et al., 2023

Other researchers were skeptical of those results, pointing out that the aquatic plants (Ruppia cirrhosa) analyzed were prone to absorbing the ancient carbon in groundwater, which could have skewed the findings and made the footprints seem older than they actually were. And the pollen samples weren’t taken from the same sediment layers as the footprints.

So the same team followed up by radiocarbon-dating pollen sampled from the same layers as some of the footprints—those that weren’t too thin for sampling. This pollen came from pine, spruce, and fir trees, i.e., terrestrial plants, thereby addressing the issue of groundwater carbon seeping into samples. They also analyzed quartz grains taken from clay just above the lowest layer of footprints using a different method, optically stimulated luminescence dating. They published those findings in 2023, which agreed with their earlier estimate.
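To see why carbon absorbed from groundwater can inflate an age estimate, it helps to look at how a radiocarbon date is calculated. The short Python sketch below uses the standard decay relationship found in any geochronology textbook; the specific fractions are invented for illustration and are not measurements from the White Sands studies.

```python
import math

HALF_LIFE_C14 = 5730.0  # years (the Cambridge half-life of carbon-14)

def radiocarbon_age(fraction_modern):
    """Apparent age in years from the measured 14C fraction relative to a
    modern standard: t = -(t_half / ln 2) * ln(F)."""
    return -(HALF_LIFE_C14 / math.log(2)) * math.log(fraction_modern)

# A terrestrial sample retaining about 6.2% of modern 14C dates to roughly
# 23,000 years (illustrative fraction, not a published measurement).
print(round(radiocarbon_age(0.062)))        # ~22,990 years

# An aquatic plant that took up "dead" carbon from groundwater starts out with
# a lower 14C fraction, so the same true age yields an artificially older date.
print(round(radiocarbon_age(0.062 * 0.9)))  # ~23,860 years, nearly 900 years too old
```

Dating terrestrial pollen and using optically stimulated luminescence on quartz grains sidesteps this reservoir effect entirely, which is why the 2023 agreement between the methods carried so much weight.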

New dating for White Sands footprints confirms controversial theory Read More »