
The first people to set foot in Australia were fossil hunters


I just think they’re neat

Europeans weren’t the first people to collect fossils in Australia.

Several species of short-faced kangaroos, like this one, once lived in Australia. Some stood two meters tall, while others were less than half a meter tall. Credit: By Ghedoghedo – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=8398432

Australia’s First Peoples may or may not have hunted the continent’s megafauna to extinction, but they definitely collected fossils.

A team of archaeologists examined the fossilized leg bone of an extinct kangaroo and realized that instead of evidence of butchery, cut marks on the bone reveal an ancient attempt at fossil collecting. That leaves Australia with little evidence of First Peoples hunting or butchering the continent’s extinct megafauna—and reopens the question of whether humans were responsible for the die-off of that continent’s giant Ice Age marsupials.

Fossil hunting in the Ice Age

In the unsolved case of whether humans hunted Australia’s Ice Age megafauna to extinction, the key piece of evidence so far is a tibia (one of the bones of the lower leg) from an extinct short-faced kangaroo. Instead of hopping like their modern relatives, these extinct kangaroos walked on their hind legs, probably placing all their weight on the tips of single hoofed toes. This particular kangaroo wasn’t quite fully grown when it died, which happened sometime between 44,500 and 55,200 years ago, based on uranium-series dating of the thin layer of rock covering most of the fossils in Mammoth Cave (in what’s now Western Australia).

There’s a shallow, angled chunk cut out of the bone near one end. When archaeologists first noticed the cut in 1970 after carefully chipping away the crust of calcium carbonate that had formed over the bone, it looked like evidence that Pleistocene hunters had carved up the kangaroo to eat it. But in their recent paper, University of New South Wales archaeologist Michael Archer and his colleagues say that’s probably not what happened. Instead, they have a stranger idea: “We suggest here that the purpose of this effort may have been the retrieval of the fossils from the bone-rich late-Pleistocene deposit in Mammoth Cave after its discovery by First Peoples,” they wrote in their recent paper.


This close-up image shows the cut kangaroo bone and a micro-CT image of the surfaces of the cut. Credit: Archer et al. 2025

The world used to be so much weirder

Based on the available archaeological evidence, it looks like people first set foot on Australia sometime around 65,000 years ago. At the time, the continent was home to a bizarre array of giant marsupials, as well as flightless birds even bigger and scarier than today’s emus and cassowaries. For the next 20,000 years, Australia’s First Peoples shared the landscape with short-faced kangaroos; Zygomaturus trilobus, a hulking 500-kilogram marsupial that looked a little like a rhinoceros; and Diprotodon optatum, the largest marsupial that ever lived: a 3,000-kilogram behemoth that roamed in huge herds (picture a bear about the size of a bison with a woodchuck’s face).

These species died out sometime around 45,000 or 40,000 years ago; today, they live on in ancient rock art and stories, some of which seem to describe people interacting with now-extinct species.

Since they had shared the continent with humans for at least 20,000 years at that point, it doesn’t seem that the sudden arrival of humans caused an immediate mass extinction. But it’s possible that by hunting or even setting controlled fires, people may have put just enough strain on these megafauna species to make them vulnerable enough for the next climate upheaval to finish them off.

In some parts of the world, there’s direct evidence that Pleistocene people hunted or scavenged meat from the remains of now-extinct megafauna. Elsewhere, archaeologists are still debating whether humans, the inexorable end of the last Ice Age, or some combination of the two killed off the world’s great Ice Age giants. The interaction between people and their local ecosystems looked (and still looks) different everywhere, depending on culture, environment, and a host of other factors.

The jury is still out on what killed the megafauna in Australia because the evidence we need either hasn’t survived the intervening millennia or still lies buried somewhere, waiting to be found and studied. For decades, the one clear bit of evidence has seemed to be the Mammoth Cave short-faced kangaroo tibia. But Archer and his colleagues argue that even that isn’t a smoking gun.


An archaeologist examines a fossil deposit in the wall of Mammoth Cave, in Western Australia. Some 50,000 years ago, one of the earliest people on the continent may also have stood here contemplating the fossils. Credit: Archer et al. 2025

Evidence of rock collecting, not butchery

For one thing, the researchers argue that the kangaroo had been dead for a very long time when the cut was made. Nine long, thin cracks run along the length of the tibia, formed when the bone dried and shrank. And in the cut section, there’s a short crack running across the width of the bone—but it stops at either end when it meets the long cracks from the bone’s drying. That suggests the bone had already dried and shrunk, leaving those long cracks before the cut was made. It may have just been a very old bone, or it may have already begun to fossilize, but the meat would have been long gone, leaving behind a bone sticking out of the cave wall.

Since there’s no mark or dent on the opposite side of the bone from the cut (which would have happened if it were lying on the ground being butchered), it was probably sticking out of the fossil bed in the cave wall when someone came along and tried to cut it free. And since a crust of calcium carbonate had time to form over the cut (it covers most of the fossils in Mammoth Cave like a rocky burial shroud), that must have happened at least 44,000 years ago.

That leaves us with an interesting mental image: a member of one of Australia’s First Peoples, 45,000 years ago, exploring a cave filled with the bones of fantastical, long-dead animals. This ancient caver finds a bone sticking out from the cave wall and tries to hack the protruding end free—twice, from different angles—before giving up and leaving it in place.

People have always collected cool rocks

We can’t know for sure why this long-ago person wanted the bone in the first place. (Did it have a religious purpose? Might it have made a good tool? Was it just a cool souvenir?) We also don’t know why they gave up their attempt. But if Archer and his colleagues are right, the bone leaves Australia without any clear evidence that ancient people hunted—or even scavenged food from the remains of—extinct Pleistocene megafauna like short-faced kangaroos.

“This is not to say that it did not happen, just that there is now no hard evidence to support that it did,” Archer and his colleagues wrote in their recent paper. We don’t yet know exactly how Australia’s First Peoples interacted with these species.

But whether Archer and his colleagues are correct in their analysis of this particular kangaroo bone or not, humans around the world have been picking up fossils for at least tens of thousands of years. There’s evidence that people in Australia have collected and traded the fossils of extinct animals for pretty much as long as people have been in Australia, including everything from trilobites to Zygomaturus teeth and the jawbones of other extinct marsupials.

“What we can conclude,” Archer and his colleagues wrote, “is that the first people in Australia who demonstrated a keen interest in and collected fossils were First Peoples, probably thousands of years before Europeans set foot on that continent.”

Royal Society Open Science, 2025. DOI: 10.1098/rsos.250078  (About DOIs).


Kiona is a freelance science journalist and resident archaeology nerd at Ars Technica.



California startup to demonstrate space weapon on its own dime


“All of the pieces that are required to make it viable exist.”

This illustration released by Apex depicts a space-based interceptor fired from a satellite in low-Earth orbit. Credit: Apex

Defense contractors are in full sales mode to win a piece of a potentially trillion-dollar pie for development of the Trump administration’s proposed Golden Dome missile shield.

CEOs are touting their companies’ ability to rapidly spool up satellite, sensor, and rocket production. Publicly, they all agree with the assertion of Pentagon officials that US industry already possesses the technologies required to make a homeland missile defense system work.

The challenge, they say, is tying all of it together under the umbrella of a sophisticated command and control network. Sensors must be able to detect and track missile threats, and that information must rapidly get to weapons that can shoot them down. Gen. Chance Saltzman, the Space Force’s top commander, likes to call Golden Dome a “system of systems.”

One of these systems stands apart. It’s the element that was most controversial when President Ronald Reagan announced the Strategic Defense Initiative or “Star Wars” program, a concept similar to Golden Dome that fizzled after the end of the Cold War.

Like the Star Wars concept 40 years ago, Golden Dome’s pièce de résistance will be a fleet of space-based interceptors loitering in orbit a few hundred miles overhead, ready to shoot down missiles shortly after they are launched. Pentagon officials haven’t disclosed the exact number of interceptors required to fulfill Golden Dome’s mission of defending the United States against a volley of incoming missiles. It will probably be in the thousands.

Skin in the game

Last month, the Defense Department released a request for prototype proposals for space-based interceptors (SBIs). The Space Force said it plans to sign agreements with multiple companies to develop and demonstrate SBIs and compete for prizes. This is an unusual procurement strategy for the Pentagon, requiring contractors to spend their own money on building and launching the SBIs into space, with the hope of eventually winning a lucrative production contract.

Apex is one of the companies jockeying for an SBI contract. Based in Los Angeles, Apex is one of several US startups looking to manufacture satellites faster and cheaper than traditional aerospace contractors. The company’s vision is to rapidly churn out satellite buses, essentially the spacecraft’s chassis, to be integrated with a customer’s payloads. So far, Apex has raised more than $500 million from investors and launched its first satellite in 2024, just two years after the company’s founding. Apex won a $46 million contract from the Space Force in February to supply the military with an unspecified number of satellites through 2032.

Apex says its satellites can perform a range of missions: remote sensing and Earth observation, communications, AI-powered edge processing, and technology demos. The largest platform in Apex’s portfolio can accommodate payloads of up to 500 kilograms (1,100 pounds), with enough power to support direct-to-cell connectivity and government surveillance missions.

A look inside Apex’s satellite factory in Los Angeles. Credit: Apex

Now, Apex wants to show its satellite design can serve as an orbiting weapons platform.

“Apex is built to move fast, and that is exactly what America and our allies need to ensure we win the New Space Race,” Ian Cinnamon, the company’s co-founder and CEO, said in a statement Wednesday. “In under a year, we are launching the host platform for space-based interceptors, called an Orbital Magazine, which will deploy multiple prototype missile interceptors in orbit.”

The demonstration mission is called Project Shadow. It’s intended to “prove that an operational SBI constellation can be deployed in the timeframe our country needs,” Cinnamon said. “Apex isn’t waiting for handouts or contracts; we are developing this Orbital Magazine technology on our own dime and moving incredibly fast.”

Star Wars redux

Just one week into his second term in the White House, President Donald Trump signed an executive order for what would soon be named Golden Dome, citing an imperative to defend the United States against ballistic missiles and emerging weapons systems like hypersonic glide vehicles and drones.

The Trump administration said in May that the defense shield would cost $175 billion over the next three years. Most analysts peg the long-term cost much higher, but no one really knows. The Pentagon hasn’t released a detailed architecture for what Golden Dome will actually entail, and the uncertainty has driven independent cost estimates ranging from $500 billion to more than $3 trillion.

Golden Dome’s unknown costs, lack of definition, and unpredictable effect on strategic stability have drawn criticism from Democratic lawmakers.

But unlike the reaction to the Reagan-era Star Wars program, there’s not much pushback on Golden Dome’s technical viability.

“All of the pieces that are required to make it viable exist. They’re out there,” Cinnamon told Ars. “We have satellites, we have boosters, we have seekers, we have fire control, we have IFTUs (in-flight target updates), we have inter-satellite links. The key is, all those pieces need to talk to each other and actually come together, and that integration is really, really difficult. The second key is, in order for it to be viable, you need enough of them in space to actually have the impact that you need.”

This frame from an Apex animation shows a space-based interceptor deploying from an Orbital Magazine.

Apex says its Project Shadow demo is scheduled to launch in June 2026. Once in orbit, the Project Shadow spacecraft will deploy two interceptors, each firing a high-thrust solid rocket motor from a third-party supplier. “The Orbital Magazine will prove its ability to environmentally control the interceptors, issue a fire control command, and close an in-space cross-link to send real-time updates post-deployment,” Apex said in a statement.

The Orbital Magazine on Apex’s drawing board could eventually carry more than 11,000 pounds (5,000 kilograms) of interceptor payload, the company said. “Orbital Magazines host one or many interceptors, allowing thousands of SBIs to be staged in orbit.”

Apex is spending about $15 million of its own money on Project Shadow. Cinnamon said Apex is working with other companies on “key parts of the interceptor and mission analysis” for Project Shadow, but he wasn’t ready to identify them yet. One possible propulsion supplier is Anduril Industries, the weapons company started by Oculus founder Palmer Luckey in 2017. Apex and Anduril have worked together before.

“What we’re very good at is high-rate manufacturing and piecing it together,” Cinnamon said. “We have suppliers for everything else.”

Apex is the first company to publicly disclose any details for an SBI demonstration, but it won’t be the last. Cinnamon said Apex will provide further updates on Project Shadow as it nears launch.

“We’re talking about it publicly because I believe it’s really important to inspire both the US and our allies, and show the pace of innovation and show what’s possible in today’s world,” Cinnamon said. “We are very fortunate to have an amazing team, a very large war chest of capital, and the ability to go do a project like this, truly for the good of the US and the good of our allies.”

A solid rocket motor designed for the ascent vehicle for NASA’s Mars Sample Return mission was test-fired by Northrop Grumman in 2023. A similar rocket motor could be used for space-based interceptors. Credit: NASA

The usual suspects

Apex will have a lot of competition vying for a slice of Golden Dome. America’s largest defense contractors have all signaled their interest in tapping into Golden Dome cash flows.

Lockheed Martin has submitted proposals to the Pentagon for space-based interceptors, the company’s CEO, James Taiclet, said Tuesday in a quarterly earnings call.

“We’re actually planning for a real on-orbit, space-based interceptor demonstration by 2028,” Taiclet said, without providing further details. Taiclet said Lockheed Martin is also working on command and control solutions for Golden Dome.

“At the same time, we’re rapidly increasing production capacity across the missiles, sensors, battle management systems, and satellite integration opportunities that will be directly relevant to achieve the overarching objective of Golden Dome,” Taiclet said.

“SBI, the space-based interceptor, is one of those,” he said. “We are building prototypes—full operational prototypes, not things in labs, not stuff on test stands, things that will go into space, or in the air, or fly across a missile range. These are real devices that will work and that can be produced at scale. So the space-based interceptor is one we’ve been pursuing already, and that’s all I can say about that.”

Northrop Grumman officials have made similar statements. Kathy Warden, Northrop’s CEO, has said her company is currently conducting “ground-based tests” of SBI-related technology. She didn’t describe the tests, although Northrop Grumman is the nation’s top supplier of solid rocket motors, a key piece of space-based interceptors, and regularly fires them on test stands.

“The architecture and spend plan for Golden Dome are not published, so I won’t comment on those specifically,” Warden said Tuesday. “We are providing some high-fidelity operational analysis that can help the customer understand those requirements, as well as ourselves.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Google has a useful quantum algorithm that outperforms a supercomputer


An approach it calls “quantum echoes” takes 13,000 times longer on a supercomputer.


The work relied on Google’s current-generation quantum hardware, the Willow chip. Credit: Google


A few years back, Google made waves when it claimed that some of its hardware had achieved quantum supremacy, performing operations that would be effectively impossible to simulate on a classical computer. That claim didn’t hold up especially well, as mathematicians later developed methods to help classical computers catch up, leading the company to repeat the work on an improved processor.

While this back-and-forth was unfolding, the field became less focused on quantum supremacy and more on two additional measures of success. The first is quantum utility, in which a quantum computer performs computations that are useful in some practical way. The second is quantum advantage, in which a quantum system completes calculations in a fraction of the time it would take a typical computer. (IBM and a startup called Pasqal have published a useful discussion about what would be required to verifiably demonstrate a quantum advantage.)

Today, Google and a large collection of academic collaborators are publishing a paper describing a computational approach that demonstrates a quantum advantage compared to current algorithms—and may actually help us achieve something useful.

Out of time

Google’s latest effort centers on something it’s calling “quantum echoes.” The approach could be described as a series of operations on the hardware qubits that make up its machine. These qubits hold a single bit of quantum information in a superposition between two values, with probabilities of finding the qubit in one value or the other when it’s measured. Each qubit is entangled with its neighbors, allowing its probability to influence those of all the qubits around it. The operations that allow computation, called gates, are ways of manipulating these probabilities. Most current hardware, including Google’s, performs manipulations on one or two qubits at a time (termed one- and two-qubit gates, respectively).

For quantum echoes, the operations involved performing a set of two-qubit gates, altering the state of the system, and later performing the reverse set of gates. On its own, this would return the system to its original state. But for quantum echoes, Google inserts single-qubit gates performed with a randomized parameter. This alters the state of the system before the reverse operations take place, ensuring that the system won’t return to exactly where it started. That explains the “echoes” portion of the name: You’re sending an imperfect copy back toward where things began, much like an echo involves the imperfect reversal of sound waves.

That’s what the process looks like in terms of operations performed on the quantum hardware. But it’s probably more informative to think of it in terms of a quantum system’s behavior. As Google’s Tim O’Brien explained, “You evolve the system forward in time, then you apply a small butterfly perturbation, and then you evolve the system backward in time.” The forward evolution is the first set of two-qubit gates, the small perturbation is the randomized one-qubit gate, and the second set of two-qubit gates is the equivalent of sending the system backward in time.

Because this is a quantum system, however, strange things happen. “On a quantum computer, these forward and backward evolutions, they interfere with each other,” O’Brien said. One way to think about that interference is in terms of probabilities. The system has multiple paths between its start point and the point of reflection—where it goes from evolving forward in time to evolving backward—and from that reflection point back to a final state. Each of those paths has a probability associated with it. And since we’re talking about quantum mechanics, those paths can interfere with each other, increasing some probabilities at the expense of others. That interference ultimately determines where the system ends up.

(Technically, these are termed “out of time order correlations,” or OTOCs. If you read the Nature paper describing this work, prepare to see that term a lot.)
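The forward-perturb-reverse structure is easy to see in a toy state-vector simulation. The sketch below is a hypothetical illustration, not Google’s code: a random unitary stands in for the forward gate sequence, and a single-qubit rotation plays the role of the butterfly perturbation. Without the perturbation, the reversed gates return the system exactly to its initial state; with it, the echo comes back imperfect.

```python
import numpy as np

def haar_unitary(dim, rng):
    # QR decomposition of a complex Gaussian matrix yields a Haar-random unitary
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(0)
n_qubits = 4
dim = 2 ** n_qubits

psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0                       # start in |00...0>

U = haar_unitary(dim, rng)          # stand-in for the forward sequence of gates

# "Butterfly" perturbation: an X rotation on one qubit (angle fixed here for
# reproducibility; the actual protocol randomizes it between runs)
theta = 1.0
rx = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
               [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
V = np.kron(rx, np.eye(dim // 2))   # acts on the first qubit only

echo_perfect = U.conj().T @ (U @ psi0)          # forward, then exact reversal
echo_perturbed = U.conj().T @ (V @ (U @ psi0))  # perturbation at the turnaround

fid_perfect = abs(np.vdot(psi0, echo_perfect)) ** 2
fid_perturbed = abs(np.vdot(psi0, echo_perturbed)) ** 2
print(fid_perfect)    # 1.0 up to floating-point error: a perfect echo
print(fid_perturbed)  # less than 1: the system doesn't quite return
```

Repeating runs like this with freshly randomized perturbations is what builds up the probability distributions that carry the OTOC signal.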

Demonstrating advantage

So how do you turn quantum echoes into an algorithm? On its own, a single “echo” can’t tell you much about the system—the probabilities ensure that any two runs might show different behaviors. But if you repeat the operations multiple times, you can begin to understand the details of this quantum interference. And performing the operations on a quantum computer ensures that it’s easy to simply rerun the operations with different random one-qubit gates and get many instances of the initial and final states—and thus a sense of the probability distributions involved.

This is also where Google’s quantum advantage comes from. Everyone involved agrees that the precise behavior of a quantum echo of moderate complexity can be modeled using any leading supercomputer. But doing so is very time-consuming, so repeating those simulations a few times becomes unrealistic. The paper estimates that a measurement that took its quantum computer 2.1 hours to perform would take the Frontier supercomputer approximately 3.2 years. Unless someone devises a far better classical algorithm than what we have today, this represents a pretty solid quantum advantage.

But is it a useful algorithm? The repeated sampling can act a bit like the Monte Carlo sampling done to explore the behavior of a wide variety of physical systems. Typically, however, we don’t view algorithms as modeling the behavior of the underlying hardware they’re being run on; instead, they’re meant to model some other physical system we’re interested in. That’s where Google’s announcement stands apart from its earlier work—the company believes it has identified an interesting real-world physical system with behaviors that the quantum echoes can help us understand.

That system is a small molecule in a Nuclear Magnetic Resonance (NMR) machine. In a second draft paper being published on the arXiv later today, Google has collaborated with a large collection of NMR experts to explore that use.

From computers to molecules

NMR is based on the fact that the nucleus of every atom has a quantum property called spin. When nuclei are near each other, such as when they’re in the same molecule, these spins can influence one another. NMR uses magnetic fields and photons to manipulate these spins and can be used to infer structural details, like how far apart two given atoms are. But as molecules get larger, these spin networks can extend for greater distances and become increasingly complicated to model. So NMR has been limited to focusing on the interactions of relatively nearby spins.

For this work, though, the researchers figured out how to use an NMR machine to create the physical equivalent of a quantum echo in a molecule. The work involved synthesizing the molecule with a specific isotope of carbon (carbon-13) in a known location in the molecule. That isotope could be used as the source of a signal that propagates through the network of spins formed by the molecule’s atoms.

“The OTOC experiment is based on a many-body echo, in which polarization initially localized on a target spin migrates through the spin network, before a Hamiltonian-engineered time-reversal refocuses to the initial state,” the team wrote. “This refocusing is sensitive to perturbations on distant butterfly spins, which allows one to measure the extent of polarization propagation through the spin network.”

Naturally, something this complicated needed a catchy nickname. The team came up with TARDIS, or Time-Accurate Reversal of Dipolar InteractionS. While that name captures the “out of time order” aspect of OTOC, it’s simply a set of control pulses sent to the NMR sample that starts a perturbation of the molecule’s network of nuclear spins. A second set of pulses then reflects an echo back to the source.

The reflections that return are imperfect, with noise coming from two sources. The first is simply imperfections in the control sequence, a limitation of the NMR hardware. But the second is the influence of fluctuations happening in distant atoms along the spin network. These happen at a certain frequency at random, or the researchers could insert a fluctuation by targeting a specific part of the molecule with randomized control signals.

The influence of what’s going on in these distant spins could allow us to use quantum echoes to tease out structural information at greater distances than we currently do with NMR. But to do so, we need an accurate model of how the echoes will propagate through the molecule. And again, that’s difficult to do with classical computations. But it’s very much within the capabilities of quantum computing, which the paper demonstrates.

Where things stand

For now, the team stuck to demonstrations on very simple molecules, making this work mostly a proof of concept. But the researchers are optimistic that there are many ways the system could be used to extract structural information from molecules at distances that are currently unobtainable using NMR. They list a lot of potential upsides that should be explored in the discussion of the paper, and there are plenty of smart people who would love to find new ways of using their NMR machines, so the field is likely to figure out pretty quickly which of these approaches turns out to be practically useful.

The fact that the demonstrations were done with small molecules, however, means that the modeling run on the quantum computer could also have been done on classical hardware (it only required 15 hardware qubits). So Google is claiming both quantum advantage and quantum utility, but not at the same time. The sorts of complex, long-distance interactions that would be out of range of classical simulation are still a bit beyond the reach of the current quantum hardware. O’Brien estimated that the hardware’s fidelity would have to improve by a factor of three or four to model molecules that are beyond classical simulation.

The quantum advantage issue should also be seen as a work in progress. Google has collaborated with enough researchers at enough institutions that there’s unlikely to be a major improvement in algorithms that could allow classical computers to catch up. Until the community as a whole has some time to digest the announcement, though, we shouldn’t take that as a given.

The other issue is verifiability. Some quantum algorithms will produce results that can be easily verified on classical hardware—situations where it’s hard to calculate the right result but easy to confirm a correct answer. Quantum echoes isn’t one of those, so we’ll need another quantum computer to verify the behavior Google has described.

But Google told Ars nothing is up to the task yet. “No other quantum processor currently matches both the error rates and number of qubits of our system, so our quantum computer is the only one capable of doing this at present,” the company said. (For context, Google says that the algorithm was run on up to 65 qubits, but the chip has 105 qubits total.)

There’s a good chance that other companies would disagree with that contention, but it hasn’t been possible to ask them ahead of the paper’s release.

In any case, even if this claim proves controversial, Google’s Michel Devoret, a recent Nobel winner, hinted that we shouldn’t have long to wait for additional ones. “We have other algorithms in the pipeline, so we will hopefully see other interesting quantum algorithms,” Devoret said.

Nature, 2025. DOI: 10.1038/s41586-025-09526-6  (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Even with protections, wolves still fear humans

This quickly became an issue, at least for some people. Mieczysław Kacprzak, an MP from Poland’s PSL Party, currently in the ruling coalition, addressed the parliament in December 2017, saying that wolves were roaming suburban roads and streets, terrorizing citizens—in his view, a tragedy waiting to happen. He also said children were afraid to go to school because of wolves and asked for support from the Ministry of Agriculture, which could lift the ban on hunting. An article in “Łowczy Polski,” a journal of the Polish hunting community with a title that translates as “The Polish Huntsman,” later backed these pro-hunting arguments, claiming wolves were a threat to humans, especially children.

The idea was that wolves, in the absence of hunting, ceased to perceive humans as a threat and felt encouraged to approach them. But it was an idea that was largely supported by anecdote. “We found this was not the case,” says Liana Zanette, a biologist at Western University and co-author of the study.

Super predators

To figure out if wolves really were no longer afraid of humans, Zanette, Clinchy, and their colleagues set up 24 camera traps in the Tuchola Forest. “Our Polish colleagues and co-authors, especially Maciej Szewczyk, helped us set those traps in places where we were most likely to find wolves,” Zanette says. “Maciej was literally saying ‘pick this tree,’ or ‘this crossroads.’” When sensors in the traps detected an animal nearby, the system took a photo and played one of three sounds, chosen at random.

The first sound was chirping birds, which the team used as a control. “We chose birds because this is a typical part of forest soundscape and we assumed wolves would not find this threatening,” Clinchy says. The next sound was barking dogs. The team picked this one because a dog is another large carnivore living in the same ecosystem, so it was expected to scare wolves. The third sound was just people talking calmly in Polish. Zanette, Clinchy, and their colleagues quantified the level of fear each sound caused in wolves by measuring how quickly they vacated the area upon hearing it.


dead-ends-is-a-fun,-macabre-medical-history-for-kids

Dead Ends is a fun, macabre medical history for kids


flukes, flops, and failures

Ars chats with co-authors Lindsey Fitzharris and Adrian Teal about their delightful new children’s book.

In 1890, a German scientist named Robert Koch thought he’d invented a cure for tuberculosis, a substance derived from the infecting bacterium itself that he dubbed Tuberculin. His substance didn’t actually cure anyone, but it was eventually widely used as a diagnostic skin test. Koch’s successful failure is just one of the many colorful cases featured in Dead Ends! Flukes, Flops, and Failures that Sparked Medical Marvels, a new nonfiction illustrated children’s book by science historian Lindsey Fitzharris and her husband, cartoonist Adrian Teal.

A noted science communicator with a fondness for the medically macabre, Fitzharris published a biography of surgical pioneer Joseph Lister, The Butchering Art, in 2017—a great, if occasionally grisly, read. She followed up with 2022’s The Facemaker: A Visionary Surgeon’s Battle to Mend the Disfigured Soldiers of World War I, about a WWI surgeon named Harold Gillies who rebuilt the faces of injured soldiers.

And in 2020, she hosted a documentary for the Smithsonian Channel, The Curious Life and Death Of…, exploring famous deaths, ranging from drug lord Pablo Escobar to magician Harry Houdini. Fitzharris performed virtual autopsies, experimented with blood samples, interviewed witnesses, and conducted real-time demonstrations in hopes of gleaning fresh insights. For his part, Teal is a well-known caricaturist and illustrator, best known for his work on the British TV series Spitting Image. His work has also appeared in The Guardian and the Sunday Telegraph, among other outlets.

The couple decided to collaborate on children’s books as a way to combine their respective skills. Granted, “[The market for] children’s nonfiction is very difficult,” Fitzharris told Ars. “It doesn’t sell that well in general. It’s very difficult to get publishers on board with it. It’s such a shame because I really feel that there’s a hunger for it, especially when I see the kids picking up these books and loving it. There’s also just a need for it with the decline in literacy rates. We need to get people more engaged with these topics in ways that go beyond a 30-second clip on TikTok.”

Their first foray into the market was 2023’s Plague-Busters! Medicine’s Battles with History’s Deadliest Diseases, exploring “the ickiest illnesses that have infected humans and affected civilizations through the ages”—as well as the medical breakthroughs that came about to combat those diseases. Dead Ends is something of a sequel, focusing this time on historical diagnoses, experiments, and treatments that were useless at best and often harmful, yet eventually led to unexpected medical breakthroughs.

Failure is an option

The book opens with the story of Robert Liston, a 19th-century Scottish surgeon known as “the fastest knife in the West End,” because he could amputate a leg in less than three minutes. That kind of speed was desirable in a period before the discovery of anesthetic, but sometimes Liston’s rapid-fire approach to surgery backfired. One story (possibly apocryphal) holds that Liston accidentally cut off the finger of his assistant in the operating theater as he was switching blades, then accidentally cut the coat of a spectator, who died of fright. The patient and assistant also died, so that operation is now often jokingly described as the only one with a 300 percent mortality rate, per Fitzharris.

Liston is the ideal poster child for the book’s theme of celebrating the role of failure in scientific progress. “I’ve always felt that failure is something we don’t talk about enough in the history of science and medicine,” said Fitzharris. “For everything that’s succeeded there’s hundreds, if not thousands, of things that’s failed. I think it’s a great concept for children. If you think that you’ve made mistakes, look at these great minds from the past. They’ve made some real whoppers. You are in good company. And failure is essential to succeeding, especially in science and medicine.”

“During the COVID pandemic, a lot of people were uncomfortable with the fact that some of the advice would change, but to me that was a comfort because that’s what you want to see scientists and doctors doing,” she continued. “They’re learning more about the virus, they’re changing their advice. They’re adapting. I think that this book is a good reminder of what the scientific process involves.”

The details of Liston’s most infamous case might be horrifying, but as Teal observes, “Comedy equals tragedy plus time.” One of the reasons so many of his patients died was because this was before the broad acceptance of germ theory and Joseph Lister’s pioneering work on antiseptic surgery. Swashbuckling surgeons like Liston prided themselves on operating in coats stiffened with blood—the sign of a busy and hence successful surgeon. Frederick Treves once observed that in the operating room, “cleanliness was out of place. It was considered to be finicking and affected. An executioner might as well manicure his nails before chopping off a head.”

“There’s always a lot of initial resistance to new ideas, even in science and medicine,” said Teal. “A lot of what we talk about is paradigm shifts and the difficulty of achieving [such a shift] when people are entrenched in their thinking. Galen was a hugely influential Roman doctor and got a lot of stuff right, but also got a lot of stuff wrong. People were clinging onto that stuff for centuries. You have misunderstanding compounded by misunderstanding, century after century, until somebody finally comes along and says, ‘Hang on a minute, this is all wrong.’”

You know… for kids

Writing for children proved to be a very different experience for Fitzharris after two adult-skewed science history books. “I initially thought children’s writing would be easy,” she confessed. “But it’s challenging to take these high-level concepts and complex stories about past medical movements and distill them for children in an entertaining and fun way.” She credits Teal—a self-described “man-child”—for taking her drafts and making them more child-friendly.

Teal’s clever, slightly macabre illustrations also helped keep the book accessible to its target audience, appealing to children’s more ghoulish side. “There’s a lot of gruesome stuff in this book,” Teal said. “Obviously it’s for kids, so you don’t want to go over the top, but equally, you don’t want to shy away from those details. I always say kids love it because kids are horrible, in the best possible way. I think adults sometimes worry too much about kids’ sensibilities. You can be a lot more gruesome than you think you can.”

The pair did omit some darker subject matter, such as the history of frontal lobotomies, notably the work of a neuroscientist named Walter Freeman, who operated an actual “lobotomobile.” For the authors, it was all about striking the right balance. “How much do you give to the kids to keep them engaged and interested, but not for it to be scary?” said Fitzharris. “We don’t want to turn people off from science and medicine. We want to celebrate the greatness of what we’ve achieved scientifically and medically. But we also don’t want to cover up the bad bits because that is part of the process, and it needs to be acknowledged.”

Sometimes Teal felt it just wasn’t necessary to illustrate certain gruesome details in the text—such as their discussion of the infamous case of Phineas Gage. Gage was a railroad construction foreman. In 1848, he was overseeing a rock blasting team when an explosion drove a three-foot tamping iron through his skull. “There’s a horrible moment when [Gage] leans forward and part of his brain drops out,” said Teal. “I’m not going to draw that, and I don’t need to, because it’s explicit in the text. If we’ve done a good enough job of writing something, that will put a mental picture in someone’s head.”

Miraculously, Gage survived, although there were extreme changes in his behavior and personality, and his injuries eventually caused epileptic seizures, one of which killed him in 1860. Gage became the index case for personality changes due to frontal lobe damage, and in the decades after his death, the case inspired neurologist David Ferrier to create brain maps based on his research into whether certain areas of the brain controlled specific cognitive functions.

“Sometimes it takes a beat before we get there,” said Fitzharris. “Science builds upon ideas, and it can take time. In the age of looking for instantaneous solutions, I think it’s important to remember that research needs to allow itself to do what it needs to do. It shouldn’t just be guided by an end goal. Some of the best discoveries that were made had no end goal in mind. And if you read Dead Ends, you’re going to be very happy that you live in 2025. Medically speaking, this is the best time. That’s really what Dead Ends is about. It’s a celebration of how far we’ve come.”


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


nasa’s-next-moonship-reaches-last-stop-before-launch-pad

NASA’s next Moonship reaches last stop before launch pad

The Orion spacecraft, which will fly four people around the Moon, arrived inside the cavernous Vehicle Assembly Building at NASA’s Kennedy Space Center in Florida late Thursday night, ready to be stacked on top of its rocket for launch early next year.

The late-night transfer covered about 6 miles (10 kilometers) from one facility to another at the Florida spaceport. NASA and its contractors are continuing preparations for the Artemis II mission after the White House approved the program as an exception to work through the ongoing government shutdown, which began on October 1.

The sustained work could set up Artemis II for a launch opportunity as soon as February 5 of next year. Astronauts Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen will be the first humans to fly on the Orion spacecraft, a vehicle that has been in development for nearly two decades. The Artemis II crew will make history on their 10-day flight by becoming the first people to travel to the vicinity of the Moon since 1972.

Where things stand

The Orion spacecraft, developed by Lockheed Martin, has made several stops at Kennedy over the last few months since leaving its factory in May.

First, the capsule moved to a fueling facility, where technicians filled it with hydrazine and nitrogen tetroxide propellants, which will feed Orion’s main engine and maneuvering thrusters on the flight to the Moon and back. In the same facility, teams loaded high-pressure helium and ammonia coolant into Orion propulsion and thermal control systems.

The next stop was a nearby building where the Launch Abort System was installed on the Orion spacecraft. The tower-like abort system would pull the capsule away from its rocket in the event of a launch failure. Orion stands roughly 67 feet (20 meters) tall with its service module, crew module, and abort tower integrated together.

Teams at Kennedy also installed four ogive panels to serve as an aerodynamic shield over the Orion crew capsule during the first few minutes of launch.

The Orion spacecraft, with its Launch Abort System and ogive panels installed, is seen last month inside the Launch Abort System Facility at Kennedy Space Center, Florida. Credit: NASA/Frank Michaux

It was then time to move Orion to the Vehicle Assembly Building (VAB), where a separate team has worked all year to stack the elements of NASA’s Space Launch System rocket. In the coming days, cranes will lift the spacecraft, weighing 78,000 pounds (35 metric tons), dozens of stories above the VAB’s center aisle, then up and over the transom into the building’s northeast high bay to be lowered atop the SLS heavy-lift rocket.


lead-poisoning-has-been-a-feature-of-our-evolution

Lead poisoning has been a feature of our evolution


A recent study found lead in teeth from 2 million-year-old hominin fossils.

Our hominid ancestors faced a Pleistocene world full of dangers—and apparently one of those dangers was lead poisoning.

Lead exposure sounds like a modern problem, at least if you define “modern” the way a paleoanthropologist might: a time that started a few thousand years ago with ancient Roman silver smelting and lead pipes. According to a recent study, however, lead is a much more ancient nemesis, one that predates not just the Romans but the existence of our genus Homo. Paleoanthropologist Renaud Joannes-Boyau of Australia’s Southern Cross University and his colleagues found evidence of exposure to dangerous amounts of lead in the teeth of fossil apes and hominins dating back almost 2 million years. And somewhat controversially, they suggest that the toxic element’s pervasiveness may have helped shape our evolutionary history.


The skull of an early hominid. Credit: Einsamer Schütze / Wikimedia

The Romans didn’t invent lead poisoning

Joannes-Boyau and his colleagues took tiny samples of preserved enamel and dentin from the teeth of 51 fossils. In most of those teeth, the paleoanthropologists found evidence that these apes and hominins had been exposed to lead—sometimes in dangerous quantities—fairly often during their early years.

Tooth enamel forms in thin layers, a little like tree rings, during the first six or so years of a person’s life. The teeth in your mouth right now (and of which you are now uncomfortably aware; you’re welcome) are a chemical and physical record of your childhood health—including, perhaps, whether you liked to snack on lead paint chips. Bands of lead-tainted tooth enamel suggest that a person had a lot of lead in their bloodstream during the year that layer of enamel was forming (in this case, “a lot” means an amount measurable in parts per million).

In 71 percent of the hominin teeth that Joannes-Boyau and his colleagues sampled, dark bands of lead in the tooth enamel showed “clear signs of episodic lead exposure” during the crucial early childhood years. Those included teeth from 100,000-year-old members of our own species found in China and 250,000-year-old French Neanderthals. They also included much earlier hominins who lived between 1 and 2 million years ago in South Africa: early members of our genus Homo, along with our relatives Australopithecus africanus and Paranthropus robustus. Lead exposure, it turns out, is a very ancient problem.

Living in a dangerous world

This study isn’t the first evidence that ancient hominins dealt with lead in their environments. Two Neanderthals living 250,000 years ago in France experienced lead exposure as young children, according to a 2018 study. At the time, they were the oldest known examples of lead exposure (and they’re included in Joannes-Boyau and his colleagues’ recent study).

Until a few thousand years ago, no one was smelting silver, plumbing bathhouses, or releasing lead fumes in car exhaust. So how were our hominin ancestors exposed to the toxic element? Another study, published in 2015, showed that the Spanish caves occupied by other groups of Neanderthals contained enough heavy metals, including lead, to “meet the present-day standards of ‘contaminated soil.’”

Today, we mostly think of lead in terms of human-made pollution, so it’s easy to forget that it’s also found naturally in bedrock and soil. If that weren’t the case, archaeologists couldn’t use lead isotope ratios to tell where certain artifacts were made. And some places—and some types of rock—have higher lead concentrations than others. Several common minerals contain lead, including galena (lead sulfide). And the kind of lead exposure documented in Joannes-Boyau and his colleagues’ study would have happened at an age when little hominins were very prone to putting rocks, cave dirt, and other random objects in their mouths.

Some of the fossils from the Queque cave system in China, which included a 1.8 million-year-old extinct gorilla-like ape called Gigantopithecus blacki, had lead levels higher than 50 parts per million, which Joannes-Boyau and his colleagues describe as “a substantial level of lead that could have triggered some developmental, health, and perhaps social impairments.”

Even for ancient hominins who weren’t living in caves full of lead-rich minerals, wildfires or volcanic eruptions can also release lead particles into the air, and erosion or flooding can sweep buried lead-rich rock or sediment into water sources. If you’re an Australopithecine living upstream of a lead-rich mica outcropping, for example, erosion might sprinkle poison into your drinking water—or the drinking water of the gazelle you eat, or the root system of the bush you get those tasty berries from…

Our world is full of poisons. Modern humans may have made a habit of digging them up and pumping them into the air, but they’ve always been lying in wait for the unwary.


Cubic crystals of the lead-sulfide mineral galena.

Digging into the details

Joannes-Boyau and his colleagues sampled the teeth of several hominin species from South Africa, all unearthed from cave systems just a few kilometers apart. All of them walked the area known as the Cradle of Humankind within a few hundred thousand years of each other (at most), and they would have shared a very similar environment. But they also would have had very different diets and ways of life, and that’s reflected in their wildly different exposures to lead.

A. africanus had the highest exposure levels, while P. robustus had signs of infrequent, very slight exposures (with Homo somewhere in between the two). Joannes-Boyau and his colleagues chalk the difference up to the species’ different diets and ecological niches.

“The different patterns of lead exposure could suggest that P. robustus lead bands were the result of acute exposure (e.g., wild forest fire),” Joannes-Boyau and his colleagues wrote, “while for the other two species, known to have a more varied diet, lead bands may be due to more frequent, seasonal, and higher lead concentration through bioaccumulation processes in the food chain.”

Did lead exposure affect our evolution?

Given their evidence that humans and their ancestors have regularly been exposed to lead, the team looked into whether this might have influenced human evolution. In doing so, they focused on a gene called NOVA1, which has been linked to both brain development and the response to lead exposure. The results fall well short of decisive; think of them as staying within the realm of a provocative hypothesis.

The NOVA1 gene encodes a protein that influences the processing of messenger RNAs, allowing it to control the production of closely related variants of a single gene. It’s notable for a number of reasons. One is its role in brain development; mice without a working copy of NOVA1 die shortly after birth due to defects in muscle control. Its activity is also altered following exposure to lead.

But perhaps its most interesting feature is that modern humans have a version of the gene that differs by a single amino acid from the version found in all other primates, including our closest relatives, the Denisovans and Neanderthals. This raises the prospect that the difference is significant from an evolutionary perspective. Altering the mouse version so that it is identical to the one found in modern humans does alter the vocal behavior of these mice.

But work with human stem cells has produced mixed results. One group, led by one of the researchers involved in this work, suggested that stem cells carrying the ancestral form of the protein behaved differently from those carrying the modern human version. But others have been unable to replicate those results.

Regardless of that bit of confusion, the researchers used the same system, culturing stem cells with the modern human and ancestral versions of the protein. These clusters of cells (called organoids) were grown in media containing two different concentrations of lead, and changes in gene activity and protein production were examined. The researchers found changes, but the significance isn’t entirely clear. There were differences between the cells with the two versions of the gene, even without any lead present. Adding lead could produce additional changes, but some of those were partially reversed if more lead was added. And none of those changes were clearly related either to a response to lead or the developmental defects it can produce.

The relevance of these changes isn’t obvious, either, as stem cell cultures tend to reflect early neural development while the lead exposure found in the fossilized remains is due to exposure during the first few years of life.

So there isn’t any clear evidence that the variant found in modern humans protects individuals who are exposed to lead, much less that it was selected by evolution for that function. And given the widespread exposure seen in this work, it seems like all of our relatives—including some we know modern humans interbred with—would also have benefited from this variant if it was protective.

Science Advances, 2025. DOI: 10.1126/sciadv.adr1524  (About DOIs).


Kiona is a freelance science journalist and resident archaeology nerd at Ars Technica.


spacex-has-plans-to-launch-falcon-heavy-from-california—if-anyone-wants-it-to

SpaceX has plans to launch Falcon Heavy from California—if anyone wants it to

There’s more to the changes at Vandenberg than launching additional rockets. The authorization gives SpaceX the green light to redevelop Space Launch Complex 6 (SLC-6) to support Falcon 9 and Falcon Heavy missions. SpaceX plans to demolish unneeded structures at SLC-6 (pronounced “Slick 6”) and construct two new landing pads for Falcon boosters on a bluff overlooking the Pacific just south of the pad.

SpaceX currently operates from a single pad at Vandenberg—Space Launch Complex 4-East (SLC-4E)—a few miles north of the SLC-6 location. The SLC-4E location is not configured to launch the Falcon Heavy, an uprated rocket with three Falcon 9 boosters bolted together.

SLC-6, cocooned by hills on three sides and flanked by the ocean to the west, is no stranger to big rockets. It was first developed for the Air Force’s Manned Orbiting Laboratory program in the 1960s, when the military wanted to put a mini-space station into orbit for astronauts to spy on the Soviet Union. Crews readied the complex to launch military astronauts on top of Titan rockets, but the Pentagon canceled the program in 1969 before anything actually launched from SLC-6.

NASA and the Air Force then modified SLC-6 to launch space shuttles. The space shuttle Enterprise was stacked vertically at SLC-6 for fit checks in 1985, but the Air Force abandoned the Vandenberg-based shuttle program after the Challenger accident in 1986. The launch facility sat mostly dormant for nearly two decades until Boeing, and then United Launch Alliance, took over SLC-6 and began launching Delta IV rockets there in 2006.

The space shuttle Enterprise stands vertically at Space Launch Complex-6 at Vandenberg. NASA used the shuttle for fit checks at the pad, but it never launched from California. Credit: NASA

ULA launched its last Delta IV Heavy rocket from California in 2022, leaving the future of SLC-6 in question. ULA’s new rocket, the Vulcan, will launch from a different pad at Vandenberg. Space Force officials selected SpaceX in 2023 to take over the pad and prepare it to launch the Falcon Heavy, which has the lift capacity to carry the military’s most massive satellites into orbit.

No big rush

Progress at SLC-6 has been slow; it took nearly a year just to prepare the Environmental Impact Statement. In reality, though, there’s no big rush to bring SLC-6 online. SpaceX has no Falcon Heavy missions from Vandenberg in its contract backlog, but the company is part of the Pentagon’s stable of launch providers. To qualify as a member of the club, SpaceX must have the capability to launch the Space Force’s heaviest missions from the military’s spaceports at Vandenberg and Cape Canaveral, Florida.


antarctica-is-starting-to-look-a-lot-like-greenland—and-that-isn’t-good

Antarctica is starting to look a lot like Greenland—and that isn’t good


Global warming is awakening sleeping giants of ice at the South Pole.

A view of the Shoesmith Glacier on Horseshoe Island on Feb. 21. Credit: Sebnem Coskun/Anadolu via Getty Images

As recently as the 1990s, when the Greenland Ice Sheet and the rest of the Arctic region were measurably thawing under the climatic blowtorch of human-caused global warming, most of Antarctica’s vast ice cap still seemed securely frozen.

But not anymore. Physics is physics. As the planet heats up, more ice will melt at both poles, and recent research shows that Antarctica’s ice caps, glaciers, and floating ice shelves, as well as its sea ice, are just as vulnerable to warming as the Arctic.

Both satellite data and field observations in Antarctica reveal alarming signs of a Greenland-like meltdown, with increased surface melting of the ice fields, faster-moving glaciers, and dwindling sea ice. Some scientists are sounding the alarm, warning that the rapid “Greenlandification” of Antarctica will have serious consequences, including an accelerated rise in sea levels and significant shifts in rainfall and drought patterns.

The Antarctic ice sheet covers about 5.4 million square miles, an area larger than Europe. On average, it is more than 1 mile thick and holds 61 percent of all the fresh water on Earth, enough to raise the global average sea level by about 190 feet if it all melts. The smaller, western portion of the ice sheet is especially vulnerable, with enough ice to raise sea level more than 10 feet.

Thirty years ago, undergraduate students were told that the Antarctic ice sheets were going to be stable and that they weren’t going to melt much, said Ruth Mottram, an ice researcher with the Danish Meteorological Institute and lead author of a new paper in Nature Geoscience that examined the accelerating ice melt and other similarities between changes in northern and southern polar regions.

“We thought it was just going to take ages for any kind of climate impacts to be seen in Antarctica. And that’s really not true,” said Mottram, adding that some of the earliest warnings came from scientists who saw collapsing ice shelves, retreating glaciers, and increased surface melting in satellite data.

One of the early warning signs was the rapid collapse of an ice shelf along the narrow Antarctic Peninsula, which extends northward toward the tip of South America, said Helen Amanda Fricker, a geophysics professor with the Scripps Institution of Oceanography Polar Center at the University of California, San Diego.

Stranded remnants of sea ice along the Antarctic Peninsula are a reminder that much of the ice on the frozen continent around the South Pole is just as vulnerable to global warming as Arctic ice, where a long-term meltdown is well underway. Credit: Bob Berwyn/Inside Climate News

After a string of record-warm summers riddled the floating Rhode Island-sized slab of ice with cracks and meltwater ponds, it crumbled almost overnight. The thick, ancient ice dam was gone, and the seven major outlet glaciers behind it accelerated toward the ocean, raising sea levels as their ice melted.

“The Larsen B ice shelf collapse in 2002 was a staggering event in our community,” said Fricker, who was not an author of the new paper. “We just couldn’t believe the pace at which it happened, within six weeks. Basically, the ice shelves are there and then, boom, boom, boom, a series of melt streams and melt ponds. And then the whole thing collapsed, smattered into smithereens.”

Glaciologists never thought that events would happen that quickly in Antarctica, she said.

Same physics, same changes

Fricker said glaciologists thought of changes in Antarctica on millennial timescales, but the ice shelf collapse showed that extreme warming can lead to much more rapid change.

Current research focuses on the edges of Antarctica, where floating sea ice and relatively narrow outlet glaciers slow the flow of the ice cap toward the sea. She described the Antarctic Ice Sheet as a giant ice reservoir contained by a series of dams.

“If humans had built those containment structures,” she said, “we would think that they weren’t very adequate. We are relying on those dams to hold back all of that ice, but the dams are weakening all around Antarctica and releasing more ice into the ocean.”

A comparison of the average concentration of Antarctic sea ice. Credit: NASA Earth Observatory

The amount of ice that’s entered the ocean has increased fourfold since the 1990s, and she said, “We’re on the cusp of it becoming a really big number… because at some point, there’s no stopping it anymore.”

The Antarctic Ice Sheet is often divided into three sectors: the East Antarctic Ice Sheet, the largest and thickest; the West Antarctic Ice Sheet; and the Antarctic Peninsula, which is deemed the most vulnerable to thawing and melting.

Mottram, the new paper’s lead author, said a 2022 heatwave that penetrated to the coldest interior part of the East Antarctic Ice Sheet may be another sign that the continent is not as isolated from the rest of the global climate system as once thought. The extraordinary 2022 heatwave was driven by an atmospheric river, or a concentrated stream of moisture-laden air. Ongoing research “shows that there’s been an increase in the number of atmospheric rivers and an increase in their intensity,” she said.

Antarctica is also encircled by a powerful circumpolar ocean current that has prevented the Southern Ocean from warming as quickly as other ocean regions. But recent climate models and observations show the buffer is breaking down and that relatively warmer waters are starting to reach the base of the ice shelves, she said.

New maps detailing winds in the region show that “swirls of air from higher latitudes are dragging in all the time, so it’s not nearly as isolated as we were always told when we were students,” she said.

Ice researcher Eric Rignot, an Earth system science professor at the University of California, Irvine, who did not contribute to the new paper, said via email that recent research on Antarctica’s floating ice shelves emphasizes the importance of how the oceans and ice interact, a process that wasn’t studied very closely in early Greenland research. And Greenland shows what will happen to Antarctic glaciers in a warmer climate with more surface melt and more intense ice-ocean interactions, he added.

“We learn from both but stating that one is becoming the other is an oversimplification,” he said. “There is no new physics in Greenland that does not apply to Antarctica and vice versa.”

Rignot said the analogy between the two regions also partly breaks down because Greenland is warming up at two to three times the global average, “which has triggered a slowing of the jet stream,” with bigger wobbles and “weird weather patterns” in the Northern Hemisphere.

Antarctica is warming slightly less than the global average rate, according to a 2025 study, and the Southern Hemisphere jet stream is strengthening and tightening toward the South Pole, “behaving completely opposite,” he said.

Mottram said her new paper aims to help people understand that Antarctica is not as remote or isolated as often portrayed, and that what happens there will affect the rest of the global climate system.

“It’s not just this place far away that nobody goes to and nobody understands,” she said. “We actually understand quite a lot of what’s going on there. And so I also hope that it drives more urgency to decarbonize, because it’s very clear that the only way we’re going to get out of this problem is bringing our greenhouse gases down as much as possible, as soon as possible.”

This story originally appeared on Inside Climate News.

rice-weevil-on-a-grain-of-rice-wins-2025-nikon-small-world-contest

Rice weevil on a grain of rice wins 2025 Nikon Small World contest

A stunning image of a rice weevil on a single grain of rice has won the 2025 Nikon Small World photomicrography contest, yielding valuable insight into the structure and behavior of—and providing a fresh perspective on—this well-known agricultural pest. The image was taken by Zhang You of Yunnan, China. Another of You’s photographs placed 15th in this year’s contest.

“It pays to dive deep into entomology: understanding insects’ behaviors and mastering lighting,” You said in a statement. “A standout work blends artistry with scientific rigor, capturing the very essence, energy, and spirit of these creatures.”

There was an element of luck in creating his winning image, too. “I had observed rice weevils in grains before, but never one with its wings spread,” You said. “This one was naturally preserved on a windowsill, perhaps in a final attempt to escape. Its tiny size makes manually preparing spread-wing specimens extremely difficult, so encountering it was both serendipitous and inspiring.”

Nikon’s annual contest was founded in 1974 “to showcase the beauty and complexity of things seen through the light microscope.” Photomicrography involves attaching a camera to a microscope (either an optical microscope or an electron microscope) so that the user can take photographs of objects at very high resolutions. British physiologist Richard Hill Norris was one of the first to use it for his studies of blood cells in 1850, and the method has increasingly been highlighted as art since the 1970s. There have been many groundbreaking technological advances in the ensuing decades, particularly with the advent of digital imaging methods.


believing-misinformation-is-a-“win”-for-some-people,-even-when-proven-false

Believing misinformation is a “win” for some people, even when proven false

Why people endorse misinformation

Our findings highlight the limits of countering misinformation directly, because for some people, literal truth is not the point.

For example, President Donald Trump incorrectly claimed in August 2025 that crime in Washington, DC, was at an all-time high, generating countless fact-checks of his premise and think pieces about his dissociation from reality.

But we believe that to someone with a symbolic mindset, debunkers merely demonstrate that they’re the ones reacting and are therefore weak. The correct information is easily available but is irrelevant to someone who prioritizes a symbolic show of strength. What matters is signaling one isn’t listening and won’t be swayed.

In fact, for symbolic thinkers, nearly any statement should be justifiable. The more outlandish or easily disproved something is, the more powerful one might seem when standing by it. Being an edgelord—a contrarian online provocateur—or outright lying can, in their own odd way, appear “authentic.”

Some people may also view their favorite dissembler’s claims as provocative trolling, but, given the link between this mindset and authoritarianism, they want those far-fetched claims acted on anyway. The deployment of National Guard troops to Washington, for example, can be the desired end goal, even if the offered justification is a transparent farce.

Is this really 5-D chess?

It is possible that symbolic, but not exactly true, beliefs have some downstream benefit, such as serving as negotiation tactics, loyalty tests, or a fake-it-till-you-make-it long game that somehow, eventually, becomes a reality. Political theorist Murray Edelman, known for his work on political symbolism, noted that politicians often prefer scoring symbolic points over delivering results—it’s easier. Leaders can offer symbolism when they have little tangible to provide.

Randy Stein is an associate professor of marketing at California State Polytechnic University, Pomona, and Abraham Rutchick is a professor of psychology at California State University, Northridge.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


starship’s-elementary-era-ends-today-with-mega-rocket’s-11th-test-flight

Starship’s elementary era ends today with mega-rocket’s 11th test flight

Future flights of Starship will end with returns to Starbase, where the launch tower will try to catch the vehicle coming home from space, similar to the way SpaceX has shown it can recover the Super Heavy booster. A catch attempt with Starship is still at least a couple of flights away.

In preparation for future returns to Starbase, the ship on Flight 11 will perform a “dynamic banking maneuver” and test subsonic guidance algorithms prior to its final engine burn to brake for splashdown. If all goes according to plan, the flight will end with a controlled water landing in the Indian Ocean approximately 66 minutes after liftoff.

Turning point

Monday’s test flight will be the last Starship launch of the year as SpaceX readies a new generation of the rocket, called Version 3, for its debut sometime in early 2026. The new version of the rocket will fly with upgraded Raptor engines and larger propellant tanks and have the capability for refueling in low-Earth orbit.

Starship Version 3 will also inaugurate SpaceX’s second launch pad at Starbase, which has several improvements over the existing site, including a flame trench to redirect engine exhaust away from the pad. The flame trench is a common feature of many launch pads, but all of the Starship flights so far have used an elevated launch mount, or stool, over a water-cooled flame deflector.

The current launch complex is expected to be modified to accommodate future Starship V3s, giving the company two pads to support a higher flight rate.

NASA is counting on a higher flight rate for Starship next year to move closer to fulfilling SpaceX’s contract to provide a human-rated lander to the agency’s Artemis lunar program. SpaceX has contracts worth more than $4 billion to develop a derivative of Starship to land NASA astronauts on the Moon.

But much of SpaceX’s progress toward a lunar landing hinges on launching numerous Starships—perhaps a dozen or more—in a matter of a few weeks or months. SpaceX is activating the second launch pad in Texas and building several launch towers and a new factory in Florida to make this possible.

Apart from recovering and reusing Starship itself, the program’s most pressing near-term hurdle is the demonstration of in-orbit refueling, a prerequisite for any future Starship voyages to the Moon or Mars. This first refueling test could happen next year but will require Starship V3 to have a smoother introduction than Starship V2, which is retiring after Flight 11 with, at best, a 40 percent success rate.
