Science

Psyche keeps its date with an asteroid, but now it’s running in backup mode

The spacecraft, built by Maxar Space Systems, will operate its electric thrusters for the equivalent of three months between now and November to keep the mission on track for arrival at asteroid Psyche in 2029.

“Through comprehensive testing and analysis, the team narrowed down the potential causes to a valve that may have malfunctioned in the primary line,” NASA said in a statement Friday. “The switch to the identical backup propellant line in late May restored full functionality to the propulsion system.”

The next waypoint on Psyche’s voyage will be a flyby of Mars in May 2026. Officials expect Psyche to keep that date, which is critical for using Mars’ gravity to slingshot the spacecraft deeper into the Solar System, eventually reaching the asteroid belt about four years from now.

NASA’s Psyche spacecraft takes a spiral path to the asteroid Psyche, as depicted in this graphic that shows the path from above the plane of the planets, labeled with key milestones of the prime mission. Credit: NASA/JPL-Caltech

At Psyche, the spacecraft will enter orbit and progressively move closer to the asteroid, using a suite of sensors to map its surface, measure its shape, mass, and gravity field, and determine its elemental composition. Observations through telescopes suggest Psyche is roughly 140 miles (226 kilometers) in diameter, or about the width of Massachusetts. But it’s likely not spherical in shape. Scientists describe its shape as more akin to a potato.

Potatoes come in lots of shapes, and researchers won’t know exactly what Psyche looks like until NASA’s asteroid explorer arrives in 2029. Psyche will be the first metallic, or M-type, asteroid visited by any spacecraft, and scientists are eager to study an object that’s largely made of metals—probably iron, nickel, and perhaps some rarer elements instead of rocky minerals.

With the Psyche spacecraft’s plasma thrusters back in action, these goals of NASA’s billion-dollar science mission remain achievable.

“The mission team’s dedication and systematic approach to this investigation exemplifies the best of NASA engineering,” said Bob Mase, Psyche project manager at JPL, in a statement. “Their thorough diagnosis and recovery, using the backup system, demonstrates the value of robust spacecraft design and exceptional teamwork.”

But there’s still a lingering concern that whatever problem caused the valve to malfunction in the primary fuel line might also eventually affect the same kind of valve in the backup line.

“We are doing a lot of good proactive work around that possible issue,” wrote Lindy Elkins-Tanton, Psyche’s principal investigator at Arizona State University, in a post on X.

New body size database for marine animals is a “library of life”

The ocean runs on size

McClain officially launched MOBS as a passion project while on sabbatical in 2022, but he had been informally collecting body size data for various marine groups for several years before that. So he already had a small set of data to kick off the project, incorporating it all into a single large database with a consistent format and style.

Craig McClain holding a giant isopod (Bathynomus giganteus), one of the deep sea’s most iconic crustaceans Credit: Craig McClain

“One of the things that had prevented me from doing this before was the taxonomy issue,” said McClain. “Say you wanted to get the body size for all [species] of octopuses. That was not something that was very well known unless some taxonomist happened to publish [that data]. And that data was likely not up-to-date because new species are [constantly] being described.”

However, in the last five to ten years, the World Register of Marine Species (WoRMS) was established with the objective of cataloging all marine life, with taxonomy experts assigned to specific groups to determine valid new species, which are then added to the data set with a specific numerical code. McClain tied his own dataset to that same code, making it quite easy to update MOBS as new species are added to WoRMS. McClain and his team were also able to gather body size data from various museum collections.
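To make that linkage concrete, here is a minimal sketch of the pattern McClain describes: keying size records to a registry’s stable numeric taxon code rather than to species names. WoRMS’s real identifier is the AphiaID, but the field names, codes, and measurements below are invented for illustration and are not MOBS’s actual schema.

```python
# Illustrative only: join a body-size table to a taxonomic registry by a
# stable numeric code, so size records stay valid as names and taxonomy change.

worms_registry = {  # taxon_code -> currently accepted species name (hypothetical codes)
    100001: "Bathynomus giganteus",
    100002: "Octopus vulgaris",
}

mobs_records = [  # body-size data keyed by the same code, never by the name itself
    {"taxon_code": 100001, "max_length_mm": 500.0},
    {"taxon_code": 100002, "max_length_mm": 1300.0},
]

for record in mobs_records:
    name = worms_registry.get(record["taxon_code"], "unresolved taxon")
    print(f"{name}: {record['max_length_mm']} mm")
```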

The MOBS database focuses on body length (a linear measurement) as opposed to body mass. “Almost every taxonomic description of a new species has some sort of linear measurement,” said McClain. “For most organisms, it’s a length, maybe a width, and if you’re really lucky you might get a height. It’s very rare for anything to be weighed unless it’s an objective of the study. So that data simply doesn’t exist.”

While all mammals generally have similar density, “If you compare the density of a sea slug, a nudibranch, versus a jellyfish, even though they have the same masses, their carbon contents are much different,” he said. “And a one-meter worm that’s a cylinder and a one-meter sea urchin that’s a sphere are fundamentally different weights and different kinds of organisms.” One solution for the latter is to convert to volume to account for shape differences. Length-to-weight ratios can also differ substantially for different marine animal groups. That’s why McClain hopes to compile a separate database for length-to-weight conversions.
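A quick worked example of the shape problem he describes, using the one-meter worm and one-meter sea urchin from the quote above. The worm’s 2 cm diameter is an arbitrary assumption; the point is only that equal lengths can imply wildly different volumes, and therefore masses.

```python
import math

# Approximate a 1 m "worm" as a cylinder 2 cm in diameter (assumed value)
# and a 1 m "sea urchin" as a sphere 1 m in diameter.
worm_volume = math.pi * (0.01 ** 2) * 1.0       # pi * r^2 * length (m^3)
urchin_volume = (4 / 3) * math.pi * (0.5 ** 3)  # (4/3) * pi * r^3 (m^3)

print(f"cylindrical worm:  {worm_volume:.4f} m^3")
print(f"spherical urchin:  {urchin_volume:.4f} m^3")
print(f"volume ratio:     ~{urchin_volume / worm_volume:.0f}x")
```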

How a data center company uses stranded renewable energy

“Decisions around where data centers get built have shifted dramatically over the last six months, with access to power now playing the most significant role in location scouting,” Joshi said. “The grid can’t keep pace with AI demands, so the industry is taking control with onsite power generation.”

Soluna, like other data center developers looking to rely on renewable energy, buys the excess power from wind, hydro, and solar plants that they can’t sell to the grid. By the end of the year, Soluna will have three facilities totaling 123 megawatts of capacity in Kentucky and Texas and seven projects in the works with upwards of 800 total megawatts.

Belizaire and I talked about how in Texas, where I report from, there’s plenty of curtailed energy from wind and solar farms because of the region’s limited transmission capacity. In West Texas, other data center developers are also taking advantage of the unused wind energy, far from major load centers like Dallas and Houston, by co-locating their giant warehouses full of advanced computers and high-powered cooling systems with the excess energy.

One data center developer using curtailed renewable power in Texas is IREN. The firm owns and operates facilities optimized for Bitcoin mining and AI. It developed a 7.5-gigawatt facility in Childress and broke ground on a 1.4-gigawatt data center in Sweetwater.

IREN purchases power through the state grid’s wholesale market during periods of oversupply, said Kent Draper, the company’s chief commercial officer, and reduces its consumption when prices are high. It’s able to do that by turning off its computers and minimizing power demand from its data centers.
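The article doesn’t describe IREN’s control software, but the behavior Draper describes (buy when the wholesale market is oversupplied, shed load when prices spike) can be sketched as a simple control loop; the price thresholds and sample prices below are hypothetical.

```python
# Hedged sketch of price-responsive load: run flexible compute when wholesale
# power is cheap and curtailment is likely, shed load when prices spike.
# All numbers here are invented for illustration.

SHED_ABOVE = 80.0  # $/MWh: pause mining/AI workloads above this price (assumed)
RUN_BELOW = 20.0   # $/MWh: run at full load below this price (assumed)

def target_load_mw(price_per_mwh: float, current_mw: float, max_mw: float) -> float:
    """Pick the load setpoint for the next settlement interval."""
    if price_per_mwh >= SHED_ABOVE:
        return 0.0        # turn the computers off, minimizing demand
    if price_per_mwh <= RUN_BELOW:
        return max_mw     # soak up cheap, otherwise-curtailed generation
    return current_mw     # otherwise hold steady

for price in (12.0, 45.0, 950.0, 18.0):  # sample 5-minute prices, $/MWh
    print(f"${price:>6.2f}/MWh -> {target_load_mw(price, 300.0, 750.0):.0f} MW")
```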

But curtailment is an issue all over the world, Belizaire said, from Oklahoma, North Dakota, South Dakota, California, and Arizona in the US, to Northern Ireland, Germany, Portugal, and Australia.

“Anywhere where you have large utility-scale renewable development that’s been built out, you’re going to find it,” Belizaire said.

In a March analysis, the US Energy Information Administration reported that solar and wind power curtailments are increasing in California. In 2024, the grid operator for most of California curtailed 3.4 million megawatt hours of utility-scale wind and solar output, a 29 percent increase from the amount of electricity curtailed in 2023.
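As a quick check on those figures, the 2023 baseline they imply is straightforward to back out (an arithmetic sanity check, not a number from the EIA report itself):

```python
# 3.4 million MWh in 2024 was reported as a 29 percent increase over 2023.
curtailed_2024_mwh = 3_400_000
curtailed_2023_mwh = curtailed_2024_mwh / 1.29
print(f"Implied 2023 curtailment: ~{curtailed_2023_mwh:,.0f} MWh")  # roughly 2.6 million
```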

A shark scientist reflects on Jaws at 50


We’re still afraid to go in the water

Ars chats with marine biologist David Shiffman about the film’s legacy—both good and bad.

Roy Scheider starred as Chief Martin Brody in the 1975 blockbuster Jaws. Credit: Universal Pictures

Today marks the 50th anniversary of Jaws, Steven Spielberg’s blockbuster horror movie based on the bestselling novel by Peter Benchley. We’re marking the occasion with a tribute to this classic film and its enduring impact on the popular perception of sharks, shark conservation efforts, and our culture at large.

(Many spoilers below.)

Jaws tells the story of Chief Martin Brody (Roy Scheider), the new police chief for Amity Island, a New England beach town and prime summer tourist attraction. But that thriving industry is threatened by a series of shark attacks, although the local mayor, Larry Vaughn (Murray Hamilton), initially dismisses the possibility, ridiculing the findings of visiting marine biologist Matt Hooper (Richard Dreyfuss). The attacks keep escalating and the body count grows, until the town hires a grizzled shark hunter named Quint (Robert Shaw) to hunt down and kill the great white shark, with the help of Brody and Hooper.

Benchley wrote his novel after reading about a sports fisherman named Frank Mundus, who captured a very large shark in 1964; in fact, the character of Quint is loosely based on Mundus. Benchley wrote an early draft of the screenplay, which underwent multiple revisions during production. In the end, he estimated that his contributions amounted to the basic storyline and the mechanics. Spielberg wasn’t the studio’s first choice for director; initially they hired Dick Richards, but Richards kept referring to the shark as a whale. Eventually, he was fired and replaced with the 26-year-old Spielberg, who had just finished his first feature film (The Sugarland Express).

Spielberg was given a $3.5 million shooting budget and a timeframe of 55 days for filming. However, the production was troubled from the start, largely due to the director’s insistence on shooting on location in Martha’s Vineyard; Jaws was the first major film to be shot on the ocean. Spielberg later admitted, “I was pretty naive about Mother Nature and the hubris of a filmmaker who thinks he can conquer the elements was foolhardy.” Unwanted boats kept drifting into the frame; cameras kept getting waterlogged; Carl Gottlieb (who played the local news editor Meadows) was nearly decapitated by a propeller; Dreyfuss nearly got stuck in the shark cage; and several actors suffered from seasickness. Frustrated crew members took to calling the movie “Flaws.”

A shark strikes

“duh-duh-duh-duh-duh-duh….” Universal Pictures

There were three pneumatically powered full-sized mechanical sharks built for the shoot, nicknamed “Bruce,” and they kept malfunctioning. The pneumatic hoses kept taking on seawater; the skin was made of neoprene foam, which soaked up water and became bloated; and one of the models kept getting tangled up in seaweed. In the end, Spielberg opted to shoot most of the early scenes without ever showing the actual shark, which actually heightened the tension and suspense, especially when combined with John Williams’ ominous theme music (“duh-duh-duh-duh-duh-duh…”).

In the end, shooting ran for 159 days, and the budget ballooned to $9 million. All the delays gave Spielberg and his writers (especially Gottlieb) extra time to refine the script, often just prior to filming the scenes. A lot of the dialogue was improvised by the actors. And it was all worth it in the end, because Jaws went on to become a major summer box office success. All told, it grossed $476 million globally across all its theatrical releases and won three Oscars, although it lost Best Picture to One Flew Over the Cuckoo’s Nest.

Jaws inspired many, many subsequent films, including Ridley Scott’s Alien in 1979, described in pitch meetings as “Jaws in space.” Audience reactions were often extreme, with many people becoming afraid to swim in the ocean for fear of sharks. And while the sequels were, shall we say, underwhelming, the original Jaws has stood the test of time. Ars spoke with marine biologist and shark conservationist David Shiffman, author of Why Sharks Matter, to discuss the film’s depiction of sharks and its enduring place in popular culture.

Ars Technica: Let’s start by talking about the enormous impact of the film, both good and bad, on the general public’s awareness of sharks.

David Shiffman: A lot of folks in both the marine science world and the ocean conservation communities have reported that Jaws in a lot of ways changed our world. It’s not that people used to think that sharks were cute, cuddly, adorable animals, and then after Jaws, they thought that they were bloodthirsty killing machines. They just weren’t on people’s minds. Fishermen knew about them, surfers thought about them, but that was about it. Most people who went to the beach didn’t pay much mind to what could be there. Jaws absolutely shattered that. My parents both reported that the summer that Jaws came out, they were afraid to go swimming in their community swimming pools.

No, really, the water’s fine!

“You knew.” The young boy’s mother (Lee Fierro) confronts Brody. Universal Pictures

David Shiffman: I have encountered people who were so scared that they were afraid to go in the bathtub. A lot of movies are very scary, but they don’t have that real-world impact. I love Jurassic Park, but I’m not afraid that a T. rex is going to eat me when I go into an outhouse, even though that’s about as realistic as what’s portrayed in Jaws. There’s something called the “Jaws Effect” in public policy literature, which is a way of measuring how fictional portrayals of real-world issues affect what citizens think about that issue and what policy preferences they support as a result. It’s fascinating how a fictional portrayal can do that, because I cannot stress enough: That is not what sharks look like or how they behave.

The movie also was the first time that a scientist was the hero. People half a generation above me have reported that seeing Richard Dreyfuss’ Hooper on the big screen as the one who saves the day changed their career trajectory. “You can be a scientist who studies fish. Cool. I want to do that.” In the time since Jaws came out, a lot of major changes have happened. One is that shark populations have declined globally by about 50 percent, and many species are now critically endangered.

And shark science has become much more professionalized. The American Elasmobranch Society—I’m on the board of directors—was founded in 1983, and now we have about 500 members in the US, Canada, and Mexico. There have since been subsequent organizations founded in Australia and the Pacific Islands, Europe, South America, and a new one starting this year in Asia.

And then, from a cultural standpoint, we now have a whole genre of bad shark movies.

Ars Technica: Sharknado!

David Shiffman: Yes! Sharknado is one of the better of the bunch. Sitting on my desk here, we’ve got Sharkenstein, Raiders of the Lost Shark, and, of course, Shark Exorcist, all from the 2010s. I’ve been quoted as saying there’s two types of shark movie: There’s Jaws and there’s bad shark movies.

Ars Technica: Populations of the tiger shark, the great white, and a couple of other species have declined so dramatically that many are on the verge of extinction. Is it just a coincidence that those declines started shortly after Jaws came out?

David Shiffman: The short answer is not that Jaws caused this, but that perhaps Jaws made it easier for it to happen because people weren’t outraged the way they might’ve been if it happened to say, whales, whose populations were also declining around the same time. The number one threat to shark species as a whole is unsustainable overfishing practices. People are killing too many sharks. Sustainable fisheries for sharks can and do exist, and the US largely has done a good job with this, but around the world, it’s a bad scene.

“A whole genre of bad shark movies”

For instance, shark fin soup started to be a problem around the 1980s thanks to the economic boom in China and the emergence of a new middle class there. Shark fin soup is a traditional Chinese and Southeast Asian delicacy. It’s associated with the emperor and his court. It’s not shark meat that’s used. It’s the little skeletal fin rays from the fins that are basically a bland, noodle-like substance when they’re dried and boiled. The purpose of this was for people to say, “I have so much money that I can eat these incredibly rare delicacies.” That was not caused by Jaws. But perhaps it was allowed to happen because there was less public sympathy for sharks.

It’s worth noting that shark fin soup and the shark fin trade is no longer the biggest or only threat to sharks. It hasn’t been in about 20 years. Ironically, a lot of that has to do with Chinese government efforts not to save the ocean, but to crack down on public corruption. A lot of government officials used to throw extravagant banquets for their friends and family. The new Chinese government said, “We’re not doing that anymore.” That alone saved a lot of endangered species. It was not motivated by concern about the state of the ocean, but it had that effect.

Ars Technica: People have a tendency to think that sharks are simply brutal killing machines. Why are they so important to the ecosystem?

David Shiffman: The title of my book is Why Sharks Matter because sharks do matter and people don’t think about them that way. These are food chains that provide billions of humans with food, including some of the poorest humans on Earth. They provide tens of millions of humans with jobs. When those food chains are disrupted, that’s bad for coastal communities, bad for food security and livelihoods. If we want to have healthy ocean food chains, we need a healthy top of the food chain, because when you lose the top of the food chain, the whole thing can unravel in unpredictable, but often quite devastating ways.

 So sharks play important ecological roles by holding the food chain that we all depend on in place. They’re also not a significant threat to you and your family. More people in a typical year die from flower pots falling on their head when they walk down the street. More people in a typical year die falling off a cliff when they’re trying to take a selfie of the scenery behind them, than are killed by sharks. Any human death or injury is a tragedy, and I don’t want to minimize that. But when we’re talking about global-scale policy responses, the relative risk versus reward needs to be considered.

Ars Technica:  There’s a scene in Jaws where Hooper is talking about his personal theory: territoriality, the idea that this rogue great white came in and made this his personal territory and now he’ll just keep feeding until the food runs out. Is that a real scientific premise from the 1970s and how valid is it?

The hunt begins

The town hires grizzled shark hunter Quint (Robert Shaw) to kill the great white shark. Universal Pictures

David Shiffman: Rogue sharks are nonsense. It is nonsense that is still held by some kooks who are ostensibly in my field, but it is not supported by any evidence whatsoever. In all of recorded human history, there is proof that exactly one shark bit more than one human. That was the Sharm el-Sheikh attacks around Christmas in Egypt a few years ago. Generally speaking, a lot of times it’s hard to predict why wild animals do or don’t do anything. But if this was a behavior that was real, there would be evidence that it happens and there isn’t any, despite a lot of people looking.

Was it commonly believed in the 1970s? No. Did Peter Benchley make it up? No. It’s a thing in some animals for sure. In some neighborhoods, people will pick up gators and move them hundreds of miles away; the gators will move back to that exact same spot. I think the same thing has been shown with bears. Wolves certainly have a home range. But for sharks, it’s not a thing.

Ars Technica: Quint has a famous monologue about surviving the USS Indianapolis sinking and witnessing crew members being eaten by sharks. How historically accurate is that?

David Shiffman: We don’t really know how many of the people who were killed following the sinking of the Indianapolis were killed by sharks. Certainly, firsthand accounts report that sharks were present. But those people were in the water because they were on a boat that exploded after being hit by a torpedo. That is not good for your health. So a lot of those people were either mortally wounded or killed by that initial explosion, and then perhaps were scavenged by sharks. Those are also people who are in the water bleeding, making a lot of noise. That’s an incredible scene in the movie. But the deaths Quint attributes to sharks is more people than have been reliably documented as killed by sharks in the history of the world ever.

Ars Technica: How accurate is Jaws in terms of how and why sharks attack humans? For instance, someone says that people splashing in the water mimics what sharks want to hunt. 

David Shiffman: Anyone who tells you they know exactly why a wild animal does or does not do something is someone who you should be a little skeptical of. But a leading theory, which I think makes sense, is this idea of mistaken identity. Some of the people who are most commonly bitten by sharks, though it’s still astronomically rare, are surfers. These are people who are cutting through the water with a silhouette that resembles a seal, wearing black neoprene, which is modeled after seal blubber. Sharks have been patrolling the ocean since before there were trees on land, and it’s only in the last hundred years or so that they’ve had to wonder, is that my preferred prey, or is it a human using technology to mimic my preferred prey for recreational purposes?

If you’ve been in the ocean, there’s been a shark not that far from you, and it knew you were there, and you probably had no idea it was there and had a pleasant day in the water. The sharks that do bite people, they take a little bite and they go, what is that? And swim away. That can be real bad if it hits a major artery or if you’re far from shore. Again, I don’t want to minimize the real harm. But it is not a shark hunting you because it has a taste for human flesh. They don’t have hands. They explore their environment with their mouths and most things in their environment they can eat.

I think Mythbusters tested fish blood versus mammal blood versus chicken blood. And the sharks were attracted to fish blood and had no reaction to the others. So these are animals that are very, very, very well adapted for environmental conditions that in some cases don’t really exist anymore.

Man vs. great white

Brody fights off an increasingly aggressive great white. Universal Pictures

With humans, most of the time, what happens is an immediate bite, and then they swim away. With seals or large prey, they’ll often hit it really hard from below, sometimes knocking it completely out of the water. Or if they’re hunting whales or something that they can’t fit in their mouth, they just take a huge bite and swim away. With fish, they swallow them whole to the extent possible. Sometimes there’s a shaking motion to snap a neck or whatever. You see that with some land predators, too. It’s nothing like what’s seen there—but what an awesome scene.

Ars Technica: What is your favorite scene in Jaws and the one that makes you cringe the most?

David Shiffman: Oh, man. It’s really a great movie, and it holds up well. It was hailed as revolutionary at the time because you hardly ever see the shark. But the reason they did that was because the model of the shark that they built kept breaking. So they decided, let’s just shoot it from the shark’s eye view and save money and annoyance. I love the scene when Hooper realizes that the tiger shark that they’ve caught is obviously not the right species and the reaction that people have to that—just this idea that science and expertise can be used to solve problems. Whenever a shark bites someone, there are people who go out and kill any shark they can find and think that they’re helping.

One of my favorite professional experiences is the American Elasmobranch Society conference. One year it was in Austin, Texas, near the original Alamo Drafthouse. Coincidentally, while we were there, the cinema held a “Jaws on the Water” event. They had a giant projector screen, and we were sitting in a lake in inner tubes while there were scuba divers in the water messing with us from below. I did that with 75 professional shark scientists. It was absolutely amazing. It helped knowing that it was a lake.

Ars Technica: If you wanted to make another really good shark movie, what would that look like today? 

David Shiffman: I often say that there are now three main movie plots: a man goes on a quest, a stranger comes to town, or there’s a shark somewhere you would not expect a shark to be. It depends if you want to make a movie that’s actually good, or one of the more fun “bad” movies like Sharknado or Sharktopus or Avalanche Sharks—the tagline of which is “snow is just frozen water.” These movies are just off the rails and absolutely incredible. The ones that don’t take themselves too seriously and are in on the joke tend to be very fun. But then you get movies like Netflix’s Under Paris (2024); they absolutely thought they were making a good movie and took themselves very seriously, and it was painful to watch.

I would love to see actual science and conservation portrayed. I’d love to see species that are not typically found in these movies featured. The Sharknado series actually did a great job of this because they talked with me and other scientists after the success of the first one. Sharknado II is thanked in my PhD dissertation, because they funded one of my chapters. In that movie, it’s not just great whites and tiger sharks and bull sharks. They have a whale shark that falls out of the sky and hits someone. They have a cookie-cutter shark that falls out of the sky and burrows through someone’s leg. There’s a lot of shark diversity out there, and it’d be nice to get that featured more.

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Microsoft lays out its path to useful quantum computing


Its platform needs error correction that works with different hardware.

Some of the optical hardware needed to make Atom Computing’s machines work. Credit: Atom Computing

On Thursday, Microsoft’s Azure Quantum group announced that it has settled on a plan for getting error correction on quantum computers. While the company pursues its own hardware efforts, the Azure team is a platform provider that currently gives access to several distinct types of hardware qubits. So it has chosen a scheme that is suitable for several different quantum computing technologies (notably excluding its own). The company estimates that the system it has settled on can take hardware qubits with an error rate of about 1 in 1,000 and use them to build logical qubits where errors are instead 1 in 1 million.

While it’s describing the scheme in terms of mathematical proofs and simulations, it hasn’t shown that it works using actual hardware yet. But one of its partners, Atom Computing, is accompanying the announcement with a description of how its machine is capable of performing all the operations that will be needed.

Arbitrary connections

There are similarities and differences between what the company is talking about today and IBM’s recent update of its roadmap, which described another path to error-resistant quantum computing. In IBM’s case, it makes both the software stack that will perform the error correction and the hardware needed to implement it. It uses chip-based hardware, with the connections among qubits mediated by wiring that’s laid out when the chip is fabricated. Since error correction schemes require a very specific layout of connections among qubits, once IBM decides on a quantum error correction scheme, it can design chips with the wiring needed to implement that scheme.

Microsoft’s Azure, in contrast, provides its users with access to hardware from several different quantum computing companies, each based on different technology. Some of them, like Rigetti and Microsoft’s own planned processor, are similar to IBM’s in that they have a fixed layout during manufacturing, and so can only handle codes that are compatible with their wiring layout. But others, such as those provided by Quantinuum and Atom Computing, store their qubits in atoms that can be moved around and connected in arbitrary ways. Those arbitrary connections allow very different types of error correction schemes to be considered.

It can be helpful to think of this using an analogy to geometry. A chip is like a plane, where it’s easiest to form the connections needed for error correction among neighboring qubits; longer connections are possible, but not as easy. Things like trapped ions and atoms provide a higher-dimensional system where far more complicated patterns of connections are possible. (Again, this is an analogy. IBM is using three-dimensional wiring in its processing chips, while Atom Computing stores all its atoms in a single plane.)

Microsoft’s announcement is focused on the sorts of processors that can form the more complicated, arbitrary connections. And, well, it’s taking full advantage of that, building an error correction system with connections that form a four-dimensional hypercube. “We really have focused on the four-dimensional codes due to their amenability to current and near term hardware designs,” Microsoft’s Krysta Svore told Ars.

The code not only describes the layout of the qubits and their connections, but also the purpose of each hardware qubit. Some of them are used to hang on to the value of the logical qubit(s) stored in a single block of code. Others are used for what are called “weak measurements.” These measurements tell us something about the state of the ones that are holding on to the data—not enough to know their values (a measurement that would end the entanglement), but enough to tell if something has changed. The details of the measurement allow corrections to be made that restore the original value.
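As a loose classical analogy for how such checks can flag that something changed without reading out the protected data, consider parity checks on a three-bit repetition code. This toy example is not Microsoft’s 4D code; it only illustrates the idea of syndrome-style measurements.

```python
# Toy classical analogy: two parity checks on a 3-bit repetition code locate a
# single flipped bit without ever needing to know the encoded value itself.

def syndrome(bits):
    """Parity checks (b0 xor b1, b1 xor b2); (0, 0) means no error detected."""
    b0, b1, b2 = bits
    return (b0 ^ b1, b1 ^ b2)

def correct(bits):
    # Map each nonzero syndrome to the bit it implicates, then flip it back.
    implicated = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if implicated is not None:
        bits[implicated] ^= 1
    return bits

encoded = [1, 1, 1]        # a logical value stored redundantly
encoded[2] ^= 1            # a single error strikes
print(syndrome(encoded))   # (0, 1): something changed, data still unread
print(correct(encoded))    # [1, 1, 1]: restored
```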

Microsoft’s error correction system is described in a preprint that the company recently released. It includes a family of related geometries, each of which provides different degrees of error correction, based on how many simultaneous errors they can identify and fix. The descriptions are about what you’d expect for complicated math and geometry—“Given a lattice Λ with an HNF L, the code subspace of the 4D geometric code C_Λ is spanned by the second homology H_2(T^4_Λ, F_2) of the 4-torus T^4_Λ”—but the gist is that all of them convert collections of physical qubits into six logical qubits that can be error corrected.

The more hardware qubits you add to host those six logical qubits, the greater error protection each of them gets. That becomes important because some more sophisticated algorithms will need more than the one-in-a-million error protection that Svore said Microsoft’s favored version will provide. That favorite is what’s called the Hadamard version, which bundles 96 hardware qubits to form six logical qubits, and has a distance of eight (distance being a measure of how many simultaneous errors it can tolerate). You can compare that with IBM’s announcement, which used 144 hardware qubits to host 12 logical qubits at a distance of 12 (so, more hardware, but more logical qubits and greater error resistance).
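A quick side-by-side of those two configurations, using the generic coding-theory fact that a distance-d code can correct up to floor((d - 1)/2) simultaneous errors; this is a rule of thumb applied to the published numbers, not a figure from either company.

```python
# Compare the announced configurations on encoding rate and on how many
# simultaneous errors a distance-d code can correct: floor((d - 1) / 2).
schemes = {
    "Microsoft Hadamard 4D code": {"physical": 96, "logical": 6, "distance": 8},
    "IBM roadmap code":           {"physical": 144, "logical": 12, "distance": 12},
}

for name, s in schemes.items():
    rate = s["logical"] / s["physical"]
    correctable = (s["distance"] - 1) // 2
    print(f"{name}: {s['logical']}/{s['physical']} logical/physical "
          f"(rate {rate:.3f}), corrects up to {correctable} simultaneous errors")
```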

The other good stuff

On its own, a description of the geometry is not especially exciting. But Microsoft argues that this family of error correction codes has a couple of significant advantages. “All of these codes in this family are what we call single shot,” Svore said. “And that means that, with a very low constant number of rounds of getting information about the noise, one can decode and correct the errors. This is not true of all codes.”

Limiting the number of measurements needed to detect errors is important. For starters, measurements themselves can create errors, so making fewer makes the system more robust. In addition, in things like neutral atom computers, the atoms have to be moved to specific locations where measurements take place, and the measurements heat them up so that they can’t be reused until cooled. So, limiting the measurements needed can be very important for the performance of the hardware.

The second advantage of this scheme, as described in the draft paper, is the fact that you can perform all the operations needed for quantum computing on the logical qubits these schemes host. Just like in regular computers, all the complicated calculations performed on a quantum computer are built up from a small number of simple logical operations. But not every possible logical operation works well with any given error correction scheme. So it can be non-trivial to show that an error correction scheme is compatible with enough of the small operations to enable universal quantum computation.

So, the paper describes how some logical operations can be performed relatively easily, while a few others require manipulations of the error correction scheme in order to work. (These manipulations have names like lattice surgery and magic state distillation, which are good signs that the field doesn’t take itself that seriously.)

So, in sum, Microsoft feels that it has identified an error correction scheme that is fairly compact, can be implemented efficiently on hardware that stores qubits in photons, atoms, or trapped ions, and enables universal computation. What it hasn’t done, however, is show that it actually works. And that’s because it simply doesn’t have the hardware right now. Azure is offering trapped ion machines from IonQ and Quantinuum, but these top out at 56 qubits—well below the 96 needed for their favored version of these 4D codes. The largest it has access to is a 100-qubit machine from a company called PASQAL, which barely fits the 96 qubits needed, leaving no room for error.

While it should be possible to test smaller versions of codes in the same family, the Azure team has already demonstrated its ability to work with error correction codes based on hypercubes, so it’s unclear whether there’s anything to gain from that approach.

More atoms

Instead, it appears to be waiting for another partner, Atom Computing, to field its next-generation machine, one it’s designing in partnership with Microsoft. “This first generation that we are building together between Atom Computing and Microsoft will include state-of-the-art quantum capabilities, will have 1,200 physical qubits,” Svore said. “And then the next upgrade of that machine will have upwards of 10,000. And so you’re looking at then being able to go to upwards of a hundred logical qubits with deeper and more reliable computation available.”

So, today’s announcement was accompanied by an update on progress from Atom Computing, focusing on a process called “midcircuit measurement.” Normally, during quantum computing algorithms, you have to resist performing any measurements of the value of qubits until the entire calculation is complete. That’s because quantum calculations depend on things like entanglement and each qubit being in a superposition between its two values; measurements can cause all that to collapse, producing definitive values and ending entanglement.

Quantum error correction schemes, however, require that some of the hardware qubits undergo weak measurements multiple times while the computation is in progress. Those are quantum measurements taking place in the middle of a computation—midcircuit measurements, in other words. To show that its hardware will be up to the task that Microsoft expects of it, the company decided to demonstrate mid-circuit measurements on qubits implementing a simple error correction code.

The process reveals a couple of notable features that are distinctive to doing this with neutral atoms. To begin with, the atoms being used for error correction have to be moved to a location—the measurement zone—where they can be measured without disturbing anything else. Then, the measurement typically heats up the atom slightly, meaning they have to be cooled back down afterward. Neither of these processes is perfect, and so sometimes an atom gets lost and needs to be replaced with one from a reservoir of spares. Finally, the atom’s value needs to be reset, and it has to be sent back to its place in the logical qubit.

Testing revealed that about 1 percent of the atoms get lost each cycle, but the system successfully replaces them. In fact, they set up a system where the entire collection of atoms is imaged during the measurement cycle, and any atom that goes missing is identified by an automated system and replaced.
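A back-of-the-envelope illustration of why that replacement machinery matters, assuming the roughly 1 percent per-cycle loss quoted above (the cycle counts are arbitrary choices for illustration):

```python
# With ~1% atom loss per measurement cycle and no replacement, the surviving
# fraction decays geometrically; a reservoir of spares keeps the register full.
loss_per_cycle = 0.01

for cycles in (10, 100, 1000):
    surviving = (1 - loss_per_cycle) ** cycles
    print(f"after {cycles:>4} cycles without replacement: {surviving:.1%} of atoms remain")
```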

Overall, without all these systems in place, the fidelity of a qubit is about 98 percent in this hardware. With error correction turned on, even this simple logical qubit saw its fidelity rise to over 99.5 percent. All of which suggests their next computer should be up to some significant tests of Microsoft’s error correction scheme.

Waiting for the lasers

The key questions are when it will be released and when its successor, which should be capable of performing some real calculations, will follow. Those are challenging questions to answer because, more so than some other quantum computing technologies, neutral atom computing is dependent on something that’s not made by the people who build the computers: lasers. Everything about this system—holding atoms in place, moving them around, measuring, performing manipulations—is done with a laser. The lower the noise of the laser (in terms of things like frequency drift and energy fluctuations), the better performance it’ll have.

So, while Atom can explain its needs to its suppliers and work with them to get things done, it has less control over its fate than some other companies in this space.

Photo of John Timmer

John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.

MIT student prints AI polymer masks to restore paintings in hours

MIT graduate student Alex Kachkine once spent nine months meticulously restoring a damaged baroque Italian painting, which left him plenty of time to wonder if technology could speed things up. Last week, MIT News announced his solution: a technique that uses AI-generated polymer films to physically restore damaged paintings in hours rather than months. The research appears in Nature.

Kachkine’s method works by printing a transparent “mask” containing thousands of precisely color-matched regions that conservators can apply directly to an original artwork. Unlike traditional restoration, which permanently alters the painting, these masks can reportedly be removed whenever needed. So it’s a reversible process that does not permanently change a painting.

“Because there’s a digital record of what mask was used, in 100 years, the next time someone is working with this, they’ll have an extremely clear understanding of what was done to the painting,” Kachkine told MIT News. “And that’s never really been possible in conservation before.”

Figure 1 from the paper. Credit: MIT

Nature reports that up to 70 percent of institutional art collections remain hidden from public view due to damage—a large amount of cultural heritage sitting unseen in storage. Traditional restoration methods, where conservators painstakingly fill damaged areas one at a time while mixing exact color matches for each region, can take weeks to decades for a single painting. It’s skilled work that requires both artistic talent and deep technical knowledge, but there simply aren’t enough conservators to tackle the backlog.

The mechanical engineering student conceived the idea during a 2021 cross-country drive to MIT, when gallery visits revealed how much art remains hidden due to damage and restoration backlogs. As someone who restores paintings as a hobby, he understood both the problem and the potential for a technological solution.

To demonstrate his method, Kachkine chose a challenging test case: a 15th-century oil painting requiring repairs in 5,612 separate regions. An AI model identified damage patterns and generated 57,314 different colors to match the original work. The entire restoration process reportedly took 3.5 hours—about 66 times faster than traditional hand-painting methods.
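Taking the reported speedup at face value, the implied hand-restoration time for that test case is easy to back out (a rough arithmetic check, not a figure from the paper):

```python
# A 3.5-hour automated run reported as ~66x faster than hand-painting implies
# a manual effort on the order of a month of full-time work.
automated_hours = 3.5
speedup = 66
manual_hours = automated_hours * speedup
print(f"~{manual_hours:.0f} hours, or about {manual_hours / 8:.0f} eight-hour workdays")
```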

Alex Kachkine, who developed the AI-printed film technique. Credit: MIT

Notably, Kachkine avoided using generative AI models like Stable Diffusion or the “full-area application” of generative adversarial networks (GANs) for the digital restoration step. According to the Nature paper, these models cause “spatial distortion” that would prevent proper alignment between the restored image and the damaged original.

SpaceX’s next Starship just blew up on its test stand in South Texas


SpaceX had high hopes for Starship in 2025, but it’s been one setback after another.

A fireball erupts around SpaceX’s Starship rocket in South Texas late Wednesday night. Credit: LabPadre

SpaceX’s next Starship rocket exploded during a ground test in South Texas late Wednesday, dealing another blow to a program already struggling to overcome three consecutive failures in recent months.

The late-night explosion at SpaceX’s rocket development complex in Starbase, Texas, destroyed the bullet-shaped upper stage that was slated to launch on the next Starship test flight. The powerful blast set off fires around SpaceX’s Massey’s Test Site, located a few miles from the company’s Starship factory and launch pads.

Live streaming video from NASASpaceflight.com and LabPadre—media organizations with cameras positioned around Starbase—showed the 15-story-tall rocket burst into flames shortly after 11:00 pm local time (12:00 am EDT; 04:00 UTC). Local residents as far as 30 miles away reported seeing and feeling the blast.

SpaceX confirmed the Starship, numbered Ship 36 in the company’s inventory, “experienced a major anomaly” on a test stand as the vehicle prepared to ignite its six Raptor engines for a static fire test. These hold-down test-firings are typically one of the final milestones in a Starship launch campaign before SpaceX moves the rocket to the launch pad.

The explosion occurred as SpaceX finished up loading super-cold methane and liquid oxygen propellants into Starship in preparation for the static fire test. The company said the area around the test site was evacuated of all personnel, and everyone was safe and accounted for after the incident. Firefighters from the Brownsville Fire Department were dispatched to the scene.

“Our Starbase team is actively working to safe the test site and the immediate surrounding area in conjunction with local officials,” SpaceX posted on X. “There are no hazards to residents in surrounding communities, and we ask that individuals do not attempt to approach the area while safing operations continue.”

Picking up the pieces

Earlier Wednesday, just hours before the late-night explosion at Starbase, an advisory released by the Federal Aviation Administration showed SpaceX had set June 29 as a tentative launch date for the next Starship test flight. That won’t happen now, and it’s anyone’s guess when SpaceX will have another Starship ready to fly.

Massey’s Test Site, named for a gun range that once occupied the property, is situated on a bend in the Rio Grande River, just a few hundred feet from the Mexican border. The test site is currently the only place where SpaceX can put Starships through proof testing and static fire tests before declaring the rockets are ready to fly.

The extent of the damage to ground equipment at Massey’s was not immediately clear, so it’s too soon to say how long the test site will be out of commission. For now, though, the explosion leaves SpaceX without a facility to support preflight testing on Starships.

The videos embedded below come from NASASpaceflight.com and LabPadre, showing multiple angles of the Starship blast.

The explosion at Massey’s is a reminder of SpaceX’s rocky path to get Starship to this point in its development. In 2020 and 2021, SpaceX lost several Starship prototypes to problems during ground and flight testing. The visual of Ship 36 going up in flames harkens back to those previous explosions, along with the fiery demise of a Falcon 9 rocket on its launch pad in 2016 under circumstances similar to Wednesday night’s incident.

SpaceX has now launched nine full-scale Starship rockets since April 2023, and before the explosion, the company hoped to launch the 10th test flight later this month. Starship’s track record has been dreadful so far this year, with the rocket’s three most recent test flights ending prematurely. These setbacks followed a triumphant 2024, when SpaceX made clear progress on each successive Starship suborbital test flight, culminating in the first catch of the rocket’s massive Super Heavy booster with giant robotic arms on the launch pad tower.

Stacked together, the Super Heavy booster stage and Starship upper stage stand more than 400 feet tall, creating the largest rocket ever built. SpaceX has already flown a reused Super Heavy booster, and the company has designed Starship itself to be recoverable and reusable, too.

After last year’s accomplishments, SpaceX appeared to be on track for a full orbital flight, an attempt to catch and recover Starship itself, and an important in-space refueling demonstration in 2025. The refueling demo has officially slipped into 2026, and it’s questionable whether SpaceX will make enough progress in the coming months to attempt recovery of a ship before the end of this year.

A Super Heavy booster and Starship upper stage are seen in March at SpaceX’s launch pad in South Texas, before the ship was stacked atop the booster for flight. The Super Heavy booster for the next Starship flight completed its static fire test earlier this month. Credit: Brandon Bell/Getty Images

Ambition meets reality

SpaceX debuted an upgraded Starship design, called Version 2 or Block 2, on a test flight in January. It’s been one setback after another since then.

The new Starship design is slightly taller than the version of Starship that SpaceX flew in 2023 and 2024. It has an improved heat shield to better withstand the extreme heat of atmospheric reentry. SpaceX also installed a new fuel feed line system to route methane fuel to the ship’s Raptor engines, and an improved propulsion avionics module controlling the vehicle’s valves and reading sensors.

Despite—or perhaps because of—all of these changes for Starship Version 2, SpaceX has been unable to replicate the successes it achieved with Starship in the last two years. Ships launched on test flights in January and March spun out of control minutes after liftoff, scattering debris over the sea, and in at least one case, onto a car in the Turks and Caicos Islands.

SpaceX engineers concluded the January failure was likely caused by intense vibrations that triggered fuel leaks and fires in the ship’s engine compartment, causing an early shutdown of the rocket’s engines. Engineers said the vibrations were likely in resonance with the vehicle’s natural frequency, intensifying the shaking beyond the levels SpaceX predicted.

The March flight failed in similar fashion, but SpaceX’s investigators determined the most probable root cause was a hardware failure in one of the ship’s engines, a different failure mode than two months before.

During SpaceX’s most recent Starship test flight last month, the rocket completed the ascent phase of the mission as planned, seemingly overcoming the problems that plagued the prior two launches. But soon after the Raptor engines shut down, a fuel leak caused the ship to begin tumbling in space, preventing the vehicle from completing a guided reentry to test the performance of new heat shield materials.

File photo of a Starship static fire in May at Massey’s Test Site.

SpaceX is working on a third-generation Starship design, called Version 3, that the company says could be ready to fly by the end of this year. The upgraded Starship Version 3 design will be able to lift heavier cargo—up to 200 metric tons—into orbit thanks to larger propellant tanks and more powerful Raptor engines. Version 3 will also have the ability to refuel in low-Earth orbit.

Version 3 will presumably have permanent fixes to the problems currently slowing SpaceX’s pace of Starship development. And there are myriad issues for SpaceX’s engineers to solve, from engine reliability and the ship’s resonant frequency, to beefing up the ship’s heat shield and fixing its balky payload bay door.

Once officials solve these problems, it will be time for SpaceX to bring a Starship from low-Earth orbit back to the ground. Then, there’s more cool stuff on the books, like orbital refueling and missions to the Moon in partnership with NASA’s Artemis program. NASA has contracts worth more than $4 billion with SpaceX to develop a human-rated Starship that can land astronauts on the Moon and launch them safely back into space.

The Trump administration’s proposed budget for NASA would cancel the Artemis program’s ultra-expensive Space Launch System rocket and Orion crew capsule after two more flights, leaving commercial heavy-lifters to take over launching astronauts from the Earth to the Moon. SpaceX’s Starship, already on contract with NASA as a human-rated lander, may eventually win more government contracts to fill the role of SLS and Orion under Trump’s proposed budget. Other rockets, such as Blue Origin’s New Glenn, are also well-positioned to play a larger role in human space exploration.

NASA’s official schedule for the first Artemis crew landing on the Moon puts the mission some time in 2027, using SLS and Orion to transport astronauts out to the vicinity of the Moon to meet up with SpaceX’s Starship lunar lander. After that mission, known as Artemis III, NASA would pivot to using commercial rockets from Elon Musk’s SpaceX and Jeff Bezos’ Blue Origin to replace the Space Launch System.

Meanwhile, SpaceX’s founder and CEO has his sights set on Mars. Last month, Musk told his employees he wants to launch the first Starships toward the Red Planet in late 2026, when the positions of Earth and Mars in the Solar System make a direct journey possible. Optimistically, he would like to send people to Mars on Starships beginning in 2028.

All of these missions are predicated on SpaceX mastering routine Starship launch operations, rapid reuse of the ship and booster, and cryogenic refueling in orbit, along with adapting systems such as life support, communications, and deep space navigation for an interplanetary journey.

The to-do list is long for SpaceX’s Starship program—too long for Mars landings to seem realistic any time in the next few years. NASA’s schedule for the Artemis III lunar landing mission in 2027 is also tight, and not only because of Starship’s delays. The development of new spacesuits for astronauts to wear on the Moon may also put the Artemis III schedule at risk. NASA’s SLS rocket and Orion spacecraft have had significant delays throughout their history, so it’s not a sure thing they will be ready in 2027.

While it’s too soon to know the precise impact of Wednesday night’s explosion, we can say with some confidence that the chances of Starship meeting these audacious schedules are lower today than they were yesterday.

Photo of Stephen Clark

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

New dating for White Sands footprints confirms controversial theory

Some of the sediment layers contained the remains of ancient grass seeds mixed with the sediment. Bennett and his colleagues radiocarbon-dated seeds from the layer just below the oldest footprints and the layer just above the most recent ones. According to those 2021 results, the oldest footprints were made sometime after 23,000 years ago; the most recent ones were made sometime before 21,000 years ago.

At that time, the northern half of the continent lay buried under ice sheets several kilometers thick. The existence of 23,000-year-old footprints could only mean that people were already living in what’s now New Mexico before the ice sheets sealed off the southern half of the continent from the rest of the world for the next few thousand years.

Ancient human footprints found in situ at White Sands National Park in New Mexico. Credit: Jeffrey S. Pigati et al., 2023

Other researchers were skeptical of those results, pointing out that the aquatic plants (Ruppia cirrhosa) analyzed were prone to absorbing the ancient carbon in groundwater, which could have skewed the findings and made the footprints seem older than they actually were. And the pollen samples weren’t taken from the same sediment layers as the footprints.

So the same team followed up by radiocarbon-dating pollen sampled from the same layers as some of the footprints—those that weren’t too thin for sampling. This pollen came from pine, spruce, and fir trees, i.e., terrestrial plants, thereby addressing the issue of groundwater carbon seeping into samples. They also analyzed quartz grains taken from clay just above the lowest layer of footprints using a different method, optically stimulated luminescence dating. They published those findings in 2023, which agreed with their earlier estimate.

Via the False Claims Act, NIH puts universities on edge


Funding pause at U. Michigan illustrates uncertainty around new language in NIH grants.

University of Michigan students walk on the UM campus next to signage displaying the University’s “Core Values” on April 3, 2025 in Ann Arbor, Michigan. Credit: Bill Pugliano/Getty Images

Earlier this year, a biomedical researcher at the University of Michigan received an update from the National Institutes of Health. The federal agency, which funds a large swath of the country’s medical science, had given the green light to begin releasing funding for the upcoming year on the researcher’s multi-year grant.

Not long after, the researcher learned that the university had placed the grant on hold. The school’s lawyers, it turned out, were wrestling with a difficult question: whether to accept new terms in the Notice of Award, a legal document that outlines the grant’s terms and conditions.

Other researchers at the university were having the same experience. Indeed, Undark’s reporting suggests that the University of Michigan—among the top three university recipients of NIH funding in 2024, with more than $750 million in grants—had quietly frozen some, perhaps all, of its incoming NIH funding dating back to at least the second half of April.

The university’s director of public affairs, Kay Jarvis, declined to comment for this article or answer a list of questions from Undark, instead pointing to the institution’s research website.

In conversations with Michigan scientists, and in internal communications obtained by Undark, administrators explained the reason for the delays: University officials were concerned about new language in NIH grant notices. That language said that universities will be subject to liability under a Civil War-era statute called the False Claims Act if they fail to abide by civil rights laws and a January 20 executive order related to gender.

For the most part, public attention to NIH funding has focused on what the new Trump administration is doing on its end, including freezing and terminating grants at elite institutions for alleged Title VI and IX violations, and slashing funding for newly disfavored areas of research. The events in Ann Arbor show how universities themselves are struggling to cope with a wave of recent directives from the federal government.

The new terms may expose universities to significant legal risk, according to several experts. “The Trump administration is using the False Claims Act as a massive threat to the bottom lines of research institutions,” said Samuel Bagenstos, a law professor at the University of Michigan, who served as general counsel for the Department of Health and Human Services during the Biden administration. (Bagenstos said he has not advised the university’s lawyers on this issue.) That law entitles the government to collect up to three times the financial damage. “So potentially you could imagine the Trump administration seeking all the federal funds times three that an institution has received if they find a violation of the False Claims Act.”

Such an action, Bagenstos and another legal expert said, would be unlikely to hold up in court. But the possibility, he said, is enough to cause concern for risk-averse institutions.

The grant pauses unsettled the affected researchers. One of them noted that the university had put a hold on a grant that supported a large chunk of their research program. “I don’t have a lot of money left,” they said.

The researcher worried that if funds weren’t released soon, personnel would have to be fired and medical research halted. “There’s a feeling in the air that somebody’s out to get scientists,” said the researcher, reflecting on the impact of all the changes at the federal level. “And it could be your turn tomorrow for no clear reason.” (The researcher, like other Michigan scientists interviewed for this story, spoke on condition of anonymity for fear of retaliation.)

Bagenstos said some other universities had also halted funding—a claim Undark was unable to confirm. At Michigan, at least, money is now flowing: On Wednesday, June 11, just hours after Undark sent a list of questions to the university’s public affairs office, some researchers began receiving emails saying their funding would be released. And research administrators received a message stating that the university would begin releasing the more than 270 awards that it had placed on hold.

The federal government distributes tens of billions of dollars each year to universities through NIH funding. In the past, the terms of those grants have required universities to comply with civil rights laws. More recently, though, the scope of those expectations has expanded. Multiple recent award notices viewed by Undark now contain language referring to a January 20 executive order that states the administration “will defend women’s rights and protect freedom of conscience by using clear and accurate language and policies that recognize women are biologically female, and men are biologically male.” The notices also contain four bullet points, one of which asks the grant recipient—meaning the researcher’s institution—to acknowledge that “a knowing false statement” regarding compliance is subject to liability under the False Claims Act.


Alongside this change, on April 21, the agency issued a policy requiring universities to certify that they will not participate in discriminatory DEI activities or boycotts of Israel, noting that false statements would be subject to penalties under the False Claims Act. (That measure was rescinded in early June, reinstated, and then rescinded again while the agency awaits further White House guidance.) Additionally, in May, an announcement from the Department of Justice encouraged use of the False Claims Act in civil rights enforcement.

Some experts said that signing onto FCA terms could put universities in a vulnerable position, not because they aren’t following civil rights laws, but because the new grant language is vague and seemingly ripe for abuse.

The False Claims Act says someone who knowingly submits a false claim to the government can be held liable for triple damages. In the case of a major research institution like the University of Michigan, worst-case scenarios could range into the billions of dollars.
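To put rough numbers on that, here is a minimal back-of-the-envelope sketch using the figure cited earlier in this article (more than $750 million in NIH grants in 2024) and the FCA’s treble-damages ceiling. Treating a full year of funding as the “damages” is a hypothetical worst case for illustration, not a prediction of how a court would calculate liability.

```python
# Back-of-the-envelope sketch of hypothetical worst-case FCA exposure.
# The $750 million figure comes from the article; treating an entire
# year's NIH funding as "damages" is an assumption for illustration only.

ANNUAL_NIH_FUNDING = 750_000_000  # dollars (2024 figure cited in the article)
TREBLE_MULTIPLIER = 3             # FCA allows up to triple damages

worst_case_exposure = ANNUAL_NIH_FUNDING * TREBLE_MULTIPLIER
print(f"Hypothetical worst-case exposure: ${worst_case_exposure:,}")
# Prints: Hypothetical worst-case exposure: $2,250,000,000
```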

It’s not just the dollar amount that may cause schools to act in a risk-averse way, said Bagenstos. The False Claims Act also contains what’s known as a “qui tam” provision, which allows private entities to file a lawsuit on behalf of the United States and then potentially take a piece of the recovery money. “The government does not have the resources to identify and pursue all cases of legitimate fraud” in the country, said Bagenstos, so generally the provision is a useful one. But it can be weaponized when “yoked to a pernicious agenda of trying to suppress speech by institutions of higher learning, or simply to try to intimidate them.”

Avoiding the worst-case scenario might seem straightforward enough: Just follow civil rights laws. But in reality, it’s not entirely clear where a university’s responsibility starts and stops. For example, an institution might officially adopt policies that align with the new executive orders. But if, say, a student group, or a sociology department, steps out of bounds, then the university might be understood to not be in compliance—particularly by a less-than-friendly federal administration.

University attorneys may also balk at the ambiguity and vagueness of terms like “gender ideology” and “DEI,” said Andrew Twinamatsiko, a director of the Center for Health Policy and the Law at the O’Neill Institute at Georgetown Law. Litigation-averse universities may end up rolling back their programming, he said, because they don’t want to run afoul of the government’s overly broad directives.

“I think this is a time that calls for some courage,” said Bagenstos. If every university decides the risks are too great, then the current policies will prevail without challenge, he said, even though some are legally unsound. And the bar for False Claims Act liability is actually quite high, he pointed out: There’s a requirement that the person knowingly made a false statement or deliberately ignored facts. Universities are actually well-positioned to prevail in court, said Bagenstos and other legal experts. The issue is that they don’t want to engage in drawn-out and potentially costly litigation.

One possibility might be for a trade group, such as the Association of American Universities, to mount the legal challenge, said Richard Epstein, a libertarian legal scholar. In his view, the new NIH terms are unconstitutional because such conditions on spending, which he characterized as “unrelated to scientific endeavors,” need to be authorized by Congress.

The NIH did not respond to repeated requests for comment.

Some people expressed surprise at the insertion of the False Claims Act language.

Michael Yassa, a professor of neurobiology and behavior at the University of California, Irvine, said that he wasn’t aware of the new terms until Undark contacted him. The NIH-supported researcher and study-section chair started reading from a recent Notice of Award during the interview. “I can’t give you a straight answer on this one,” he said, and after further consideration, added, “Let me run this by a legal team.”

Andrew Miltenberg, an attorney in New York City who’s nationally known for his work on Title IX litigation, was more pointed. “I don’t actually understand why it’s in there,” he said, referring to the new grant language. “I don’t think it belongs in there. I don’t think it’s legal, and I think it’s going to take some lawsuits to have courts interpret the fact that there’s no real place for it.”

This article was originally published on Undark. Read the original article.

Via the False Claims Act, NIH puts universities on edge Read More »

spanish-blackout-report:-power-plants-meant-to-stabilize-voltage-didn’t

Spanish blackout report: Power plants meant to stabilize voltage didn’t

The blackout that took down the Iberian grid serving Spain and Portugal in April was the result of a number of smaller interacting problems, according to an investigation by the Spanish government. The report concludes that several steps meant to address a small instability made matters worse, eventually leading to a self-reinforcing cascade where high voltages caused power plants to drop off the grid, thereby increasing the voltage further. Critically, the report suggests that the Spanish grid operator had an unusually low number of plants on call to stabilize matters, and some of the ones it did have responded poorly.

The full report will be available later today; ahead of that, the government has released a summary. The document includes a timeline of the events that triggered the blackout, as well as an analysis of why grid management failed to keep it in check. It also notes that a parallel investigation checked for indications of a cyberattack and found none.

Oscillations and a cascade

The document notes that for several days prior to the blackout, the Iberian grid had been experiencing voltage fluctuations—products of a mismatch between supply and demand—that had been managed without incident. These continued through the morning of April 28 until shortly after noon, when an unusual frequency oscillation occurred. This oscillation has been traced back to a single facility on the grid, but the report doesn’t identify it or even indicate its type, simply referring to it as an “instalación” (a generic Spanish term for a facility).

The grid operators responded in a way that suppressed the oscillations but increased the voltages on the grid. About 15 minutes later, a weakened version of this oscillation occurred again, followed shortly thereafter by oscillations at a different frequency, this one with properties that are commonly seen on European grids. That prompted the grid operators to take corrective steps again, which increased the voltages on the grid.

The Iberian grid is capable of handling this sort of thing. But the grid operator only scheduled 10 power plants to handle voltage regulation on the 28th, which the report notes is the lowest total it had committed to in all of 2025 up to that point. The report found that a number of those plants failed to respond properly to the grid operators, and a few even responded in a way that contributed to the surging voltages.
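The failure mode the report describes is essentially a positive feedback loop: high voltage trips plants offline, which removes capacity that would otherwise absorb the excess, which pushes voltage higher still. The toy loop below is purely illustrative; its numbers and its simple update rule are assumptions, not anything taken from the report or from real grid models.

```python
# Purely illustrative toy model of a self-reinforcing overvoltage cascade.
# Thresholds, step sizes, and the linear update rule are invented for
# demonstration; real grid dynamics are far more complex.

voltage = 1.02          # per-unit voltage, already running slightly high
plants_online = 10      # plants scheduled to help regulate voltage
TRIP_THRESHOLD = 1.05   # voltage at which a plant's protection disconnects it

for step in range(8):
    if voltage > TRIP_THRESHOLD and plants_online > 0:
        plants_online -= 1   # overvoltage protection trips a plant offline...
        voltage += 0.02      # ...so less excess is absorbed and voltage climbs faster
    else:
        voltage += 0.01      # slow drift from the underlying supply/demand mismatch
    print(f"step {step}: voltage = {voltage:.2f} pu, plants online = {plants_online}")
```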

Spanish blackout report: Power plants meant to stabilize voltage didn’t Read More »

we’ve-had-a-denisovan-skull-since-the-1930s—only-nobody-knew

We’ve had a Denisovan skull since the 1930s—only nobody knew


It’s a Denisovan? Always has been.

After years of mystery, we now know what at least one Denisovan looked like.

A 146,000-year-old skull from Harbin, China, belongs to a Denisovan, according to a recent study of proteins preserved inside the ancient bone. The paleoanthropologists who studied the Harbin skull in 2021 declared it a new (to us) species, Homo longi. But the Harbin skull still contains enough of its original proteins to tell a different story: A few of them matched specific proteins from Denisovan bones and teeth, as encoded in Denisovan DNA.

So Homo longi was a Denisovan all along, and thanks to the remarkably well-preserved skull, we finally know what the enigmatic Denisovans actually looked like.

Two early-human skulls against a black background.

Credit: Ni et al. 2021

The Harbin skull (left) and the Dali skull (right).

Unmasking Dragon Man 

Paleoanthropologist Qiang Ji, of the Chinese Academy of Sciences, and colleagues tried to sequence ancient DNA from several samples of the Harbin skull’s bone and its one remaining tooth, but they had no luck. Proteins tend to be hardier molecules than DNA, though, and in samples from the skull’s temporal bone (one of the paired bones on the sides of the head, just behind the cheekbones), the researchers struck pay dirt.

They found fragments of a total of 95 proteins. Four of these had variants specific to the Denisovan lineage, and the Harbin skull matched Denisovans on three of them. That’s enough to confidently say that the Harbin skull belonged to a Denisovan. So for the past few years, we’ve had images of an almost uncannily well-preserved Denisovan skull—which is a pretty big deal, especially when you consider its complicated history.
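The identification step described here is, in essence, a lookup at a handful of lineage-diagnostic positions. Below is a minimal sketch of that idea; the protein names, positions, and residues are invented for illustration and are not the actual diagnostic variants from the study.

```python
# Hypothetical illustration of matching a fossil's recovered protein fragments
# against lineage-diagnostic amino acid variants. All positions and residues
# below are made up for demonstration purposes.

denisovan_diagnostic = {          # (protein, position) -> Denisovan-specific residue
    ("COL1A2", 996): "A",
    ("ALB", 190): "T",
    ("AMBN", 74): "S",
    ("ENAM", 112): "G",
}

fossil_observed = {               # residues recovered from the fossil (hypothetical)
    ("COL1A2", 996): "A",
    ("ALB", 190): "T",
    ("AMBN", 74): "S",
    ("ENAM", 112): "D",           # this position happens not to match
}

matches = sum(
    fossil_observed.get(site) == residue
    for site, residue in denisovan_diagnostic.items()
)
print(f"Fossil matches Denisovan-specific residues at {matches} of "
      f"{len(denisovan_diagnostic)} diagnostic sites")
```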

The world knows what the skull looks like now, but for decades after its discovery in the 1930s, only one person did. It was unearthed in Harbin, in northeast China, during the Japanese occupation of the area. Not wanting it to be seized by the occupying government, the person who found the skull immediately hid it, and he kept it hidden for most of the rest of his life.

He eventually turned it over to scientists in 2018, and they published their analysis in 2021. That analysis placed the Harbin skull, along with a number of other fossils from China, in a distinct lineage within our genus, Homo, making them our species’ closest fossil relatives. They called this alleged new species Homo longi, or “Dragon Man.”

The decision to classify Homo longi as a new species was largely due to the skull’s unique combination of features (which we’ll discuss below). But it was a controversial decision, partly because paleoanthropologists don’t entirely agree about whether we should even call Neanderthals a distinct species. If the line between Neanderthals and our species is that blurry, many in the field have questioned whether Homo longi could be considered a distinct species, when it’s even closer to us than the Neanderthals.

Meanwhile, the 2021 paper also left room for debate on whether the skull might actually have belonged to a Denisovan rather than a distinct new species. Its authors acknowledge that one of the fossils they label as Homo longi had already been identified as a Denisovan based on its protein sequences. They also point out that the Harbin skull has rather large molars, which seem to be a common feature in Denisovans.

The paper’s authors argued that their Homo longi should be a separate branch of the hominin lineage, more closely related to us than to Denisovans or Neanderthals. But if the Harbin skull looks so much like Denisovan fossils and so little like fossils from our species, that alleged relationship begins to look pretty dubious. In the end, the 2021 paper’s authors dodged the issue by saying that “new genetic material will test the relationship of these populations to each other and to the Denisovans.”

Which turned out to be exactly what happened.

A ghost lineage comes to life

Denisovans are the ghost in our family tree. For scientists, a “ghost lineage” is one that’s known mostly from genetic evidence, not fossils; like a ghost, it has a presence we can sense but no physical form we can touch. With the extremely well-preserved Harbin skull identified as a Denisovan, though, we’re finally able to look our “ghost” cousins in the face.

Paleogeneticists have recovered Denisovan DNA from tiny fragments of bone and teeth, and even from the soil of a cave floor. Genomics researchers have found segments of Denisovan DNA woven into the genomes of some modern humans, revealing just how close our two species once were. But the handful of Denisovan fossils paleoanthropologists have unearthed are mostly small fragments—a finger bone here, a tooth there, a jawbone someplace else—that don’t reveal much about how Denisovans lived or what they looked like.

We know they existed and that they were something slightly different from Homo sapiens or Neanderthals. We even know when and where they lived and a surprising amount about their genetics, and we have some very strong hints about how they interacted with our species and with Neanderthals. But we didn’t really know what they looked like, and we couldn’t hope to identify their fossils without turning to DNA or protein sequences.

Until now.

Neanderthals and Denisovans probably enjoyed the view from Denisova Cave, too. Credit: loronet / Flickr

The face of a Denisovan

So what did a Denisovan look like? Harbin 1 has a wide, flattish face with small cheekbones, big eye sockets, and a heavy brow. Its upper jaw juts forward just a little, and it had big, robust molars. The cranium itself is longer and less dome-like than ours, but it’s roomy enough for a big brain (about 1,420 cubic centimeters).

Some of those traits, like the large molars and the long, low cranium, resemble those of earlier hominin species such as Homo erectus or Homo heidelbergensis. Others, like a relatively flat face, set beneath the cranium instead of sticking out in front of it, look more like us. (Early hominins, like Australopithecus afarensis, don’t really have foreheads because their skulls are arranged so their brains are right behind their faces instead of partly above them, like ours.)

In other words, Harbin’s features are what paleoanthropologists call a mosaic, with some traits that look like they come from older lineages and some that seem more modern. Mosaics are common in the hominin family tree.

But for all the detail it reveals about the Denisovans, Harbin is still just one skull from one individual. Imagine trying to reconstruct all the diversity of human faces from just one skull. We have to assume that Denisovans—a species that spanned a huge swath of our planet, from Siberia to Taiwan, and a wide range of environments, from high-altitude plateaus in Tibet to subtropical forests—were also a pretty diverse species.

It’s also worth remembering that the Harbin skull is exactly that: a skull. It can’t tell us much about how tall its former user was, how they were built, or how they moved or worked during their life. We can’t even say for sure whether Harbin is osteologically or genetically male or female. In other words, some of the mystery of the Denisovans still endures.

What’s next?

In the 2021 papers, the researchers noted that the Harbin skull also bears a resemblance to a 200,000- to 260,000-year-old skull found in Dali County in northwestern China, a roughly 300,000-year-old skull found in Hualong Cave in eastern China, and a 260,000-year-old skull from Jinniushi (sometimes spelled Jinniushan) Cave in China. And some fossils from Taiwan and northern China have molars that look an awful lot like those in that Tibetan jawbone.

“These hominins potentially also belong to Denisovan populations,” write Ji and colleagues. That means we might already have a better sample of Denisovan diversity than this one skull suggests.

And, like the Harbin skull, the bones and teeth of those other fossils may hold ancient DNA or proteins that could help confirm that intriguing possibility.

Science, 2025. DOI: 10.1126/science.adu9677 (About DOIs).

Kiona is a freelance science journalist and resident archaeology nerd at Ars Technica.

We’ve had a Denisovan skull since the 1930s—only nobody knew Read More »

honda’s-hopper-suddenly-makes-the-japanese-carmaker-a-serious-player-in-rocketry

Honda’s hopper suddenly makes the Japanese carmaker a serious player in rocketry

The company has not disclosed its spending on rocket development. Honda’s hopper is smaller than similar prototype boosters SpaceX has used for vertical landing demos, so engineers will have to scale up the design to create a viable launch vehicle.

But Tuesday’s test catapulted Honda into an exclusive club of companies that have flown reusable rocket hoppers with an eye toward orbital flight, including SpaceX, Blue Origin, and a handful of Chinese startups. Meanwhile, European and Japanese space agencies have funded a pair of reusable rocket hoppers named Themis and Callisto; after years of delays, neither has flown.

Honda’s experimental rocket lifts off from a test site in Taiki, a community in northern Japan.

Before Honda’s leadership green-lit the rocket project in 2019, a group of the company’s younger engineers proposed applying its expertise in combustion and control technologies to a launch vehicle. Honda officials believe the carmaker “has the potential to contribute more to people’s daily lives by launching satellites with its own rockets.”

The company suggested in its press release Tuesday that a Honda-built rocket might launch Earth observation satellites to monitor global warming and extreme weather, and satellite constellations for wide-area communications. Specifically, the company noted the importance of satellite communications to enabling connected features in cars, airplanes, and other Honda products.

“In this market environment, Honda has chosen to take on the technological challenge of developing reusable rockets by utilizing Honda technologies amassed in the development of various products and automated driving systems, based on a belief that reusable rockets will contribute to achieving sustainable transportation,” Honda said.

Toyota, Japan’s largest car company, also has a stake in the launch business. Interstellar Technologies, a Japanese space startup, announced a $44 million investment from Toyota in January. The two firms said they were establishing an alliance to draw on Toyota’s formula for automobile manufacturing to set up a factory for mass-producing orbital-class rockets. Interstellar has launched a handful of sounding rockets but hasn’t yet built an orbital launcher.

Japan’s primary rocket builder, Mitsubishi Heavy Industries, is another titan of Japanese industry, but it has never launched more than six space missions in a single year. MHI’s newest rocket, the H3, debuted in 2023 but is fully expendable.

The second-biggest Japanese automaker, Honda, is now making its own play. Car companies aren’t accustomed to making vehicles that can only be used once.

Honda’s hopper suddenly makes the Japanese carmaker a serious player in rocketry Read More »