

Pentagon contract figures show ULA’s Vulcan rocket is getting more expensive

A SpaceX Falcon Heavy rocket with NASA’s Psyche spacecraft launches from NASA’s Kennedy Space Center in Florida on October 13, 2023. Credit: Chandan Khanna/AFP via Getty Images

The launch orders announced Friday comprise the second batch of NSSL Phase 3 missions the Space Force has awarded to SpaceX and ULA.

It’s important to remember that these prices aren’t what ULA or SpaceX would charge a commercial satellite customer. The US government pays a premium for access to space. The Space Force, the National Reconnaissance Office, and NASA don’t insure their launches the way a commercial customer would. Instead, government agencies get more insight into their launch contractors, including inspections, flight data reviews, risk assessments, and security checks. Government missions also typically get priority on ULA and SpaceX’s launch schedules. All of this adds up to more money.

A heavy burden

Four of the five launches awarded to SpaceX Friday will use the company’s larger Falcon Heavy rocket, according to Lt. Col. Kristina Stewart at Space Systems Command. One will fly on SpaceX’s workhorse Falcon 9. This is the first time a majority of the Space Force’s annual launch orders has required the lift capability of a Falcon Heavy, with three Falcon 9 booster cores combining to heave larger payloads into space.

All versions of ULA’s Vulcan rocket use a single core booster, with varying numbers of strap-on solid-fueled rocket motors to provide extra thrust off the launch pad.

Here’s a breakdown of the seven new missions assigned to SpaceX and ULA:

USSF-149: Classified payload on a SpaceX Falcon 9 from Florida

USSF-63: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-155: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-205: WGS-12 communications satellite on a SpaceX Falcon Heavy from Florida

NROL-86: Classified payload on a SpaceX Falcon Heavy from Florida

USSF-88: GPS IIIF-4 navigation satellite on a ULA Vulcan VC2S (two solid rocket boosters) from Florida

NROL-88: Classified payload on a ULA Vulcan VC4S (four solid rocket boosters) from Florida



How different mushrooms learned the same psychedelic trick

Magic mushrooms have been used in traditional ceremonies and for recreational purposes for thousands of years. Now a new study has found that mushrooms evolved the ability to make the same psychoactive substance twice. The discovery has important implications both for our understanding of these mushrooms’ role in nature and for their medical potential.

Magic mushrooms produce psilocybin, which your body converts into its active form, psilocin, when you ingest it. Psilocybin rose in popularity in the 1960s and was eventually classed as a Schedule I drug in the US in 1970 and as a Class A drug in the UK in 1971, the designations given to drugs considered to have a high potential for abuse and no accepted medical use. This put a stop to research on the medical use of psilocybin for decades.

But recent clinical trials have shown that psilocybin can reduce depression severity, suicidal thoughts, and chronic anxiety. Given its potential for medical treatments, there is renewed interest in understanding how psilocybin is made in nature and how we can produce it sustainably.

The new study, led by pharmaceutical microbiology researcher Dirk Hoffmeister, from Friedrich Schiller University Jena, discovered that mushrooms can make psilocybin in two different ways, using different types of enzymes. This also helped the researchers discover a new way to make psilocybin in a lab.

Based on the work led by Hoffmeister, the enzymes from the two unrelated mushrooms under study appear to have evolved independently of each other and take different routes to create the exact same compound.

This is a process known as convergent evolution, in which unrelated organisms independently evolve the same trait. Caffeine is a classic example: plants as different as coffee, tea, cacao, and guaraná have each evolved the ability to produce the stimulant on their own.

This is the first time that convergent evolution has been observed in two organisms that belong to the fungal kingdom. Interestingly, the two mushrooms in question have very different lifestyles. Inocybe corydalina, also known as the greenflush fibrecap and the object of Hoffmeister’s study, grows in association with the roots of different kinds of trees. Psilocybe mushrooms, on the other hand, traditionally known as magic mushrooms, live on nutrients that they acquire by decomposing dead organic matter, such as decaying wood, grass, roots, or dung.



A biological 0-day? Threat-screening tools may miss AI-designed proteins.

Ordering DNA for AI-designed toxins doesn’t always raise red flags.

Designing variations of the complex, three-dimensional structures of proteins has been made a lot easier by AI tools. Credit: Historical / Contributor

On Thursday, a team of researchers led by Microsoft announced that they had discovered, and possibly patched, what they’re terming a biological zero-day—an unrecognized security hole in a system that protects us from biological threats. The system at risk screens purchases of DNA sequences to determine when someone’s ordering DNA that encodes a toxin or dangerous virus. But, the researchers argue, it has become increasingly vulnerable to missing a new threat: AI-designed toxins.

How big of a threat is this? To understand, you have to know a bit more about both existing biosurveillance programs and the capabilities of AI-designed proteins.

Catching the bad ones

Biological threats come in a variety of forms. Some are pathogens, such as viruses and bacteria. Others are protein-based toxins, like the ricin that was sent to the White House in 2003. Still others are chemical toxins that are produced through enzymatic reactions, like the molecules associated with red tide. All of them get their start through the same fundamental biological process: DNA is transcribed into RNA, which is then used to make proteins.

For several decades now, starting the process has been as easy as ordering the needed DNA sequence online from any of a number of companies, which will synthesize a requested sequence and ship it out. Recognizing the potential threat here, governments and industry have worked together to add a screening step to every order: the DNA sequence is scanned for its ability to encode parts of proteins or viruses considered threats. Any positives are then flagged for human intervention to evaluate whether they or the people ordering them truly represent a danger.

Both the list of proteins and the sophistication of the scanning have been continually updated in response to research progress over the years. For example, initial screening was done based on similarity to target DNA sequences. But there are many DNA sequences that can encode the same protein, so the screening algorithms have been adjusted accordingly, recognizing all the DNA variants that pose an identical threat.
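The degeneracy the screeners have to handle can be illustrated with a toy translator. This is a sketch for illustration only, using a deliberately tiny codon table, not any vendor’s actual screening code:

```python
# Minimal sketch: the genetic code is degenerate, so different DNA
# sequences can encode an identical protein. A small subset of the
# standard codon table is enough to show the point.
CODON_TABLE = {
    "ATG": "M", "AAA": "K", "AAG": "K",
    "GGT": "G", "GGC": "G", "GGA": "G", "GGG": "G",
    "TAA": "*",
}

def translate(dna):
    """Translate a DNA string into a one-letter amino acid sequence."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "*":  # stop codon ends translation
            break
        protein.append(aa)
    return "".join(protein)

# Two orders that differ at the DNA level...
seq_a = "ATGAAAGGTTAA"
seq_b = "ATGAAGGGCTAA"
assert seq_a != seq_b
# ...but encode the same protein, so screening has to compare at the
# protein level rather than just string-match DNA sequences.
assert translate(seq_a) == translate(seq_b) == "MKG"
```

Modern screeners effectively work in this translated space, which is why synonymous DNA changes no longer fool them.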

The new work can be thought of as an extension of that principle. Not only can multiple DNA sequences encode the same protein; multiple proteins can perform the same function. To form a toxin, for example, typically requires the protein to adopt the correct three-dimensional structure, which brings a handful of critical amino acids within the protein into close proximity. Outside of those critical amino acids, however, things can often be quite flexible. Some amino acids may not matter at all; other locations in the protein could work with any positively charged amino acid, or any hydrophobic one.

In the past, it could be extremely difficult (meaning time-consuming and expensive) to do the experiments that would tell you what sorts of changes a string of amino acids could tolerate while remaining functional. But the team behind the new analysis recognized that AI protein design tools have now gotten quite sophisticated and can predict when distantly related sequences can fold up into the same shape and catalyze the same reactions. The process is still error-prone, and you often have to test a dozen or more proposed proteins to get a working one, but it has produced some impressive successes.

So, the team developed a hypothesis to test: AI can take an existing toxin and design a protein with the same function that’s distantly related enough that the screening programs do not detect orders for the DNA that encodes it.

The zero-day treatment

The team started with a basic test: use AI tools to design variants of the toxin ricin, then test them against the software that is used to screen DNA orders. The results of the test suggested there was a risk of dangerous protein variants slipping past existing screening software, so the situation was treated like the equivalent of a zero-day vulnerability.

“Taking inspiration from established cybersecurity processes for addressing such situations, we contacted the relevant bodies regarding the potential vulnerability, including the International Gene Synthesis Consortium and trusted colleagues in the protein design community as well as leads in biosecurity at the US Office of Science and Technology Policy, US National Institute of Standards and Technologies, US Department of Homeland Security, and US Office of Pandemic Preparedness and Response,” the authors report. “Outside of those bodies, details were kept confidential until a more comprehensive study could be performed in pursuit of potential mitigations and for ‘patches’… to be developed and deployed.”

Details of that original test are being made available today as part of a much larger analysis that extends the approach to a large range of toxic proteins. Starting with 72 toxins, the researchers used three open source AI packages to generate a total of about 75,000 potential protein variants.

And this is where things get a little complicated. Many of the AI-designed protein variants are going to end up being non-functional, either subtly or catastrophically failing to fold up into the correct configuration to create an active toxin. The only way to know which ones work is to make the proteins and test them biologically; most AI protein design efforts will make actual proteins from dozens to hundreds of the most promising-looking potential designs to find a handful that are active. But doing that for 75,000 designs is completely unrealistic.

Instead, the researchers used two software-based tools to evaluate each of the 75,000 designs. One of these focuses on the similarity between the overall predicted physical structure of the proteins, and another looks at the predicted differences between the positions of individual amino acids. Either way, they’re a rough approximation of just how similar the proteins formed by two strings of amino acids should be. But they’re definitely not a clear indicator of whether those two proteins would be equally functional.
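The excerpt doesn’t name the two tools, but a standard ingredient in this kind of structural comparison is the root-mean-square deviation (RMSD) between corresponding atom positions in two already-aligned structures. A minimal sketch, with made-up coordinates:

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two aligned sets of 3D points
    (e.g., corresponding alpha-carbon positions in two predicted structures).
    Lower values mean the structures agree more closely."""
    assert len(coords_a) == len(coords_b) and coords_a
    total = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(total / len(coords_a))

# Hypothetical example: a structure compared against a copy of itself,
# and against a copy shifted 1 angstrom along the z-axis.
ref  = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
test = [(0.0, 0.0, 1.0), (1.5, 0.0, 1.0), (3.0, 0.0, 1.0)]
assert rmsd(ref, ref) == 0.0
assert abs(rmsd(ref, test) - 1.0) < 1e-9
```

Whatever the exact metrics, the key caveat from the article stands: geometric similarity is a proxy, not proof, that two proteins are equally functional.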

In any case, DNA sequences encoding all 75,000 designs were fed into the software that screens DNA orders for potential threats. One thing that was very clear is that there were huge variations in the ability of the four screening programs to flag these variant designs as threatening. Two of them seemed to do a pretty good job, one was mixed, and another let most of them through. Three of the software packages were updated in response to this performance, which significantly improved their ability to pick out variants.

There was also a clear trend in all four screening packages: The closer the variant was to the original structurally, the more likely the package (both before and after the patches) was to be able to flag it as a threat. In all cases, there was also a cluster of variant designs that were unlikely to fold into a similar structure, and these generally weren’t flagged as threats.

What does this mean?

Again, it’s important to emphasize that this evaluation is based on predicted structures; “unlikely” to fold into a similar structure to the original toxin doesn’t mean these proteins will be inactive as toxins. Functional proteins are probably going to be very rare among this group, but there may be a handful in there. That handful is also probably rare enough that you would have to order up and test far too many designs to find one that works, making this an impractical threat vector.

At the same time, there are also a handful of proteins that are very similar to the toxin structurally and not flagged by the software. For the three patched versions of the software, the ones that slip through the screening represent about 1 to 3 percent of the total in the “very similar” category. That’s not great, but it’s probably good enough that any group that tries to order up a toxin by this method would attract attention because they’d have to order over 50 just to have a good chance of finding one that slipped through, which would raise all sorts of red flags.
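That “over 50” figure follows from simple probability, assuming (for the sake of the estimate) that each order slips through independently at the low end of the quoted 1 to 3 percent rate:

```python
# Back-of-envelope check, assuming a ~2 percent chance that any single
# near-identical variant slips past a patched screener, independently
# for each order placed.
def p_at_least_one(n, slip_rate=0.02):
    """Probability that at least one of n ordered designs goes unflagged."""
    return 1 - (1 - slip_rate) ** n

# Even after 50 orders, the chance of one slipping through is only
# about 64 percent, and every flagged order invites human review.
assert 0.63 < p_at_least_one(50) < 0.65
```

That pile of flagged orders, rather than the screener catching every single variant, is what makes this an impractical attack today.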

One other notable result is that the designs that weren’t flagged were mostly variants of just a handful of toxin proteins. So this is less of a general problem with the screening software and might be more of a small set of focused problems. Of note, one of the proteins that produced a lot of unflagged variants isn’t toxic itself; instead, it’s a co-factor necessary for the actual toxin to do its thing. As such, some of the screening software packages didn’t even flag the original protein as dangerous, much less any of its variants. (For these reasons, the company that makes one of the better-performing software packages decided the threat here wasn’t significant enough to merit a security patch.)

So, on its own, this work doesn’t seem to have identified something that’s a major threat at the moment. But it’s probably useful, in that it’s a good thing to get the people who engineer the screening software to start thinking about emerging threats.

That’s because, as the people behind this work note, AI protein design is still in its early stages, and we’re likely to see considerable improvements. And there’s likely to be a limit to the sorts of things we can screen for. We’re already at the point where AI protein design tools can be used to create proteins that have entirely novel functions and do so without starting with variants of existing proteins. In other words, we can design proteins that are impossible to screen for based on similarity to known threats, because they don’t look at all like anything we know is dangerous.

Protein-based toxins would be very difficult to design entirely from scratch, because they have to both cross the cell membrane and then do something dangerous once inside. While AI tools are probably unable to design something that sophisticated at the moment, I would be hesitant to rule out their eventually reaching that level of sophistication.

Science, 2025. DOI: 10.1126/science.adu8578  (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Scientists revive old Bulgarian recipe to make yogurt with ants

Fermenting milk to make yogurt, cheeses, or kefir is an ancient practice, and different cultures have their own traditional methods, often preserved in oral histories. The forests of Bulgaria and Turkey have an abundance of red wood ants, for instance, so a time-honored Bulgarian yogurt-making practice involves dropping a few live ants (or crushed-up ant eggs) into the milk to jump-start fermentation. Scientists have now figured out why the ants are so effective in making edible yogurt, according to a paper published in the journal iScience. The authors even collaborated with chefs to create modern recipes using ant yogurt.

“Today’s yogurts are typically made with just two bacterial strains,” said co-author Leonie Jahn from the Technical University of Denmark. “If you look at traditional yogurt, you have much bigger biodiversity, varying based on location, households, and season. That brings more flavors, textures, and personality.”

If you want to study traditional culinary methods, it helps to go where those traditions emerged, since the locals likely still retain memories and oral histories of said culinary methods—in this case, Nova Mahala, Bulgaria, where co-author Sevgi Mutlu Sirakova’s family still lives. To recreate the region’s ant yogurt, the team followed instructions from Sirakova’s uncle. They used fresh raw cow milk, warmed until scalding, “such that it could ‘bite your pinkie finger,'” per the authors. Four live red wood ants were then collected from a local colony and added to the milk.

The authors secured the milk with cheesecloth and wrapped the glass container in fabric for insulation before burying it inside the ant colony, covering the container completely with the mound material. “The nest itself is known to produce heat and thus act as an incubator for yogurt fermentation,” they wrote. They retrieved the container 26 hours later to taste it and check the pH, stirring it to observe the coagulation. The milk had definitely begun to thicken and sour, producing the early stage of yogurt. Tasters described it as “slightly tangy, herbaceous,” with notes of “grass-fed fat.”



Blue Origin aims to land next New Glenn booster, then reuse it for Moon mission

“We fully intend to recover the New Glenn first stage on this next launch.”

New Glenn lifts off on its debut flight on January 16, 2025. Credit: Blue Origin

There’s a good bit riding on the second launch of Blue Origin’s New Glenn rocket.

Most directly, the fate of a NASA science mission to study Mars’ upper atmosphere hinges on a successful launch. The second flight of Blue Origin’s heavy-lifter will send two NASA-funded satellites toward the red planet to study the processes that drove Mars’ evolution from a warmer, wetter world to the cold, dry planet of today.

A successful launch would also nudge Blue Origin closer to winning certification from the Space Force to begin launching national security satellites.

But there’s more on the line. If Blue Origin plans to launch its first robotic Moon lander early next year—as currently envisioned—the company needs to recover the New Glenn rocket’s first stage booster. Crews will again dispatch Blue Origin’s landing platform into the Atlantic Ocean, just as they did for the first New Glenn flight in January.

The debut launch of New Glenn successfully reached orbit, a difficult feat for the inaugural flight of any rocket. But the booster fell into the Atlantic Ocean after three of the rocket’s engines failed to reignite to slow down for landing. Engineers identified seven changes to resolve the problem, focusing on what Blue Origin calls “propellant management and engine bleed control improvements.”

Relying on reuse

Pat Remias, Blue Origin’s vice president of space systems development, said Thursday that the company is confident in nailing the landing on the second flight of New Glenn. That launch, with NASA’s next set of Mars probes, is likely to occur no earlier than November from Cape Canaveral Space Force Station, Florida.

“We fully intend to recover the New Glenn first stage on this next launch,” Remias said in a presentation at the International Astronautical Congress in Sydney. “Fully intend to do it.”

Blue Origin, owned by billionaire Jeff Bezos, nicknamed the booster stage for the next flight “Never Tell Me The Odds.” It’s not quite fair to say the company’s leadership has gone all-in with its bet that the next launch will result in a successful booster landing. But the difference between a smooth touchdown and another crash landing will have a significant effect on Bezos’ Moon program.

That’s because the third New Glenn launch, penciled in for no earlier than January of next year, will reuse the same booster flown on the upcoming second flight. The payload on that launch will be Blue Origin’s first Blue Moon lander, aiming to become the largest spacecraft to reach the lunar surface. Ars has published a lengthy feature on the Blue Moon lander’s role in NASA’s effort to return astronauts to the Moon.

“We will use that first stage on the next New Glenn launch,” Remias said. “That is the intent. We’re pretty confident this time. We knew it was going to be a long shot [to land the booster] on the first launch.”

A long shot, indeed. It took SpaceX 20 launches of its Falcon 9 rocket over five years before pulling off the first landing of a booster. It was another 15 months before SpaceX launched a previously flown Falcon 9 booster for the first time.

With New Glenn, Blue’s engineers hope to drastically shorten the learning curve. Going into the second launch, the company’s managers anticipate refurbishing the first recovered New Glenn booster to launch again within 90 days. That would be a remarkable accomplishment.

Dave Limp, Blue Origin’s CEO, wrote earlier this year on social media that recovering the booster on the second New Glenn flight will “take a little bit of luck and a lot of excellent execution.”

On September 26, Blue Origin shared this photo of the second New Glenn booster on social media.

Blue Origin’s production of second stages for the New Glenn rocket has far outpaced manufacturing of booster stages. The second stage for the second flight was test-fired in April, and Blue completed a similar static-fire test for the third second stage in August. Meanwhile, according to a social media post written by Limp last week, the body of the second New Glenn booster is assembled, and installation of its seven BE-4 engines is “well underway” at the company’s rocket factory in Florida.

The lagging production of New Glenn boosters, known as GS1s (Glenn Stage 1s), is partly by design. Blue Origin’s strategy with New Glenn has been to build a small number of GS1s, each of which is more expensive and labor-intensive to build than a SpaceX Falcon 9 booster. This approach counts on routine recoveries and rapid refurbishment of boosters between missions.

However, this strategy comes with risks, as it puts the booster landings in the critical path for ramping up New Glenn’s launch rate. At one time, Blue aimed to launch eight New Glenn flights this year; it will probably end the year with two.

Laura Maginnis, Blue Origin’s vice president of New Glenn mission management, said last month that the company was building a fleet of “several boosters” and had eight upper stages in storage. That would bode well for a quick ramp-up in launch cadence next year.

However, Blue’s engineers haven’t had a chance to inspect or test a recovered New Glenn booster. Even if the next launch concludes with a successful landing, the rocket could come back to Earth with some surprises. SpaceX’s initial development of Falcon 9 and Starship was richer in hardware, with many boosters in production to decouple successful landings from forward progress.

Blue Moon

All of this means a lot is riding on an on-target landing of the New Glenn booster on the next flight. Separate from Blue Origin’s ambitions to fly many more New Glenn rockets next year, a good recovery would also mean an earlier demonstration of the company’s first lunar lander.

The lander set to launch on the third New Glenn mission is known as Blue Moon Mark 1, an unpiloted vehicle designed to robotically deliver up to 3 metric tons (about 6,600 pounds) of cargo to the lunar surface. The spacecraft will have a height of about 26 feet (8 meters), taller than the lunar lander used for NASA’s Apollo astronaut missions.

The first Blue Moon Mark 1 is funded from Blue Origin’s coffers. It is now fully assembled and will soon ship to NASA’s Johnson Space Center in Houston for vacuum chamber testing. Then, it will travel to Florida’s Space Coast for final launch preparations.

“We are building a series, not a singular lander, but multiple types and sizes and scales of landers to go to the Moon,” Remias said.

The second Mark 1 lander will carry NASA’s VIPER rover to prospect for water ice at the Moon’s south pole in late 2027. Around the same time, Blue will use a Mark 1 lander to deploy two small satellites to orbit the Moon, flying as low as a few miles above the surface to scout for resources like water, precious metals, rare earth elements, and helium-3 that could be extracted and exploited by future explorers.

A larger lander, Blue Moon Mark 2, is in an earlier stage of development. It will be human-rated to land astronauts on the Moon for NASA’s Artemis program.

Blue Origin’s Blue Moon MK1 lander, seen in the center, is taller than NASA’s Apollo lunar lander, currently the largest spacecraft to have landed on the Moon. Blue Moon MK2 is even larger, but all three landers are dwarfed in size by SpaceX’s Starship. Credit: Blue Origin

NASA’s other crew-rated lander will be derived from SpaceX’s Starship rocket. But Starship and Blue Moon Mark 2 are years away from being ready to accommodate a human crew, and both require orbital cryogenic refueling—something never before attempted in space—to transit out to the Moon.

This has led to a bit of a dilemma at NASA. China is also working on a lunar program, eyeing a crew landing on the Moon by 2030. Many experts say that, as of today, China is on pace to land astronauts on the Moon before the United States.

Of course, 12 US astronauts walked on the Moon in the Apollo program. But no one has gone back since 1972, and NASA and China are each planning to return to the Moon to stay.

One way to speed up a US landing on the Moon might be to use a modified version of Blue Origin’s Mark 1 lander, Ars reported Thursday.

If this is the path NASA takes, the stakes for the next New Glenn launch and landing will soar even higher.


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Trump offers universities a choice: Comply for preferential funding

On Wednesday, The Wall Street Journal reported that the Trump administration had offered nine schools a deal: manage your universities in a way that aligns with administration priorities and get “substantial and meaningful federal grants,” along with other benefits. Failure to accept the bargain would result in a withdrawal of federal programs that would likely cripple most universities. The offer, sent to a mixture of state and private universities, would see the government dictate everything from hiring and admissions standards to grading, and it includes provisions that appear intended to make conservative ideas more welcome on campus.

The document was sent to the University of Arizona, Brown University, Dartmouth College, Massachusetts Institute of Technology, the University of Pennsylvania, the University of Southern California, the University of Texas, Vanderbilt University, and the University of Virginia. However, independent reporting indicates that the administration will ultimately extend the deal to all colleges and universities.

Ars has obtained a copy of the proposed “Compact for Academic Excellence in Higher Education,” which makes the scope of the bargain clear in its introduction. “Institutions of higher education are free to develop models and values other than those below, if the institution elects to forego federal benefits,” it suggests, while mentioning that those benefits include access to fundamental needs, like student loans, federal contracts, research funding, tax benefits, and immigration visas for students and faculty.

It is difficult to imagine how it would be possible to run a major university without access to those programs, making this less a compact and more of an ultimatum.

Poorly thought through

The Compact itself would see universities agree to cede admissions standards to the federal government. The government, in this case, is demanding the use of only “objective” criteria, such as GPA and standardized test scores, as the basis of admissions decisions, and that schools publish those criteria on their websites. They would also have to publish anonymized data comparing how admitted and rejected students did relative to these criteria.



World-famous primatologist Jane Goodall dead at 91


A sculpture of Jane Goodall and David Greybeard outside the Field Museum of Natural History in Chicago Credit: Geary/CC0

David Greybeard’s behavior also challenged the long-held assumption that chimpanzees were vegetarians. Goodall found that chimps would hunt and eat smaller primates like colobus monkeys as well, sometimes sharing the carcass with other troop members. She also recorded evidence of strong bonds between mothers and infants, as well as altruism, compassion, aggression, and violence. For instance, dominant females would sometimes kill the infants of rival females, and from 1974 to 1978, there was a violent conflict between two communities of chimpanzees that became known as the Gombe Chimpanzee War.

Almost human

One of the more colorful chimps Goodall studied was named Frodo, who grew up to be an alpha male with a temperament very unlike his literary namesake. “As an infant, Frodo proved mischievous, disrupting Jane Goodall’s efforts to record data on mother-infant relationships by grabbing at her notebooks and binoculars,” anthropologist Michael Wilson of the University of Minnesota in Saint Paul recalled on his blog when Frodo died from renal failure in 2013. “As he grew older, Frodo developed a habit of throwing rocks, charging at, hitting, and knocking over human researchers and tourists.” Frodo attacked Wilson twice on Wilson’s first trip to Gombe, even beating Goodall herself in 1989, although he eventually lost his alpha status and “mellowed considerably” in his later years, per Wilson.

Goodall became so renowned around the world that she even featured in one of Gary Larson’s Far Side cartoons, in which two chimps are shown grooming when one finds a blonde hair on the other. “Conducting a little more ‘research’ with that Jane Goodall tramp?” the caption read. The JGI was not amused, sending Larson a letter (without Goodall’s knowledge) calling the cartoon an “atrocity,” but their objections were not shared by Goodall herself, who thought the cartoon was very funny when she heard of it. Goodall even wrote a preface to The Far Side Gallery 5. Larson, for his part, visited Goodall’s research facility in Tanzania in 1988, where he experienced Frodo’s alpha aggressiveness firsthand.


A young Jane Goodall in the field. Credit: YouTube/Jane Goodall Institute

Goodall founded the JGI in 1977 and authored more than 27 books, most notably My Friends, the Wild Chimpanzees (1967), In the Shadow of Man (1971), and Through a Window (1990). There was some initial controversy around her 2014 book Seeds of Hope, co-written with Gail Hudson, when portions were found to have been plagiarized from online sources; the publisher postponed publication so that Goodall could revise the book and add 57 pages of endnotes. (She blamed her “chaotic note-taking” for the issue.) National Geographic released a full-length documentary last year about her life’s work, drawing from over 100 hours of previously unseen archival footage.

World-famous primatologist Jane Goodall dead at 91 Read More »

megafauna-was-the-meat-of-choice-for-south-american-hunters

Megafauna was the meat of choice for South American hunters

And that makes perfect sense, because when you reduce hunters’ choices to simple math using what’s called the prey choice model (more on that below), these long-lost species offered bigger returns for the effort of hunting. In other words, giant sloths are extinct because they were delicious and made of meat.
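
The prey choice model mentioned above ranks potential prey by post-encounter return rate: calories gained per unit of handling time. A minimal sketch of that arithmetic, with all species values invented for illustration (not data from the study):

```python
# Illustrative sketch of the prey choice model from optimal foraging theory.
# Every number below is invented for illustration, not taken from the study.

prey = {
    # species: (calories per carcass, handling hours per carcass)
    "giant sloth": (400_000, 20),
    "extinct horse": (120_000, 8),
    "deer": (40_000, 4),
    "armadillo": (5_000, 1),
}

# Rank prey by post-encounter return rate: calories per hour of handling.
ranked = sorted(prey, key=lambda sp: prey[sp][0] / prey[sp][1], reverse=True)
for sp in ranked:
    cal, hours = prey[sp]
    print(f"{sp}: {cal / hours:,.0f} kcal/hour")
```

Under these toy numbers, megafauna dominate the ranking even though they take far longer to process, which is the model's core point: a bigger package can still be the better bet.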

Yup, it’s humanity’s fault—again

As the last Ice Age drew to a close, the large animals that had once dominated the world’s chilly Pleistocene landscapes started to vanish. Mammoths, saber-toothed tigers, and giant armadillos died out altogether. Other species went locally extinct; rhinoceroses no longer stomped around southern Europe, and horses disappeared from the Americas until European colonists brought new species with them thousands of years later.

Scientists have been arguing about how much of that was humanity’s fault for quite a while.

Most of the blame goes to the world’s changing climate; habitats shifted as the world mostly got warmer and wetter. But, at least in some places, humans may have sped the process along, either by hunting the last of the Pleistocene megafauna to extinction or just by shaking up the rest of the ecosystem so much that it was all too ready to collapse, taking the biggest species down with it.

It looks, at first glance, like South America’s late Ice Age hunters are safely not guilty. For one thing, the megafauna didn’t start dying out until thousands of years after humans first set foot in the region. Archaeologists also haven’t found many sites that contain both traces of human activity and the bones of extinct horses, giant armadillos, or other megafauna. And at those few sites, megafauna bones made up only a small percentage of the contents of ancient scrap piles. Not enough evidence places us at the crime scene, in other words—or so it seems.

On the other hand, the Ice Age megafauna began dying out in South America around 13,000 years ago, roughly the same time that a type of projectile point called the fishtail appeared. That may not be a coincidence, argued one study. And late last year, another study showed that farther north, in what’s now the United States, Clovis people’s diets contained mammoth amounts of… well, mammoth.

Megafauna was the meat of choice for South American hunters Read More »

is-the-“million-year-old”-skull-from-china-a-denisovan-or-something-else?

Is the “million-year-old” skull from China a Denisovan or something else?


Homo longi by any other name

Now that we know what Denisovans looked like, they’re turning up everywhere.


A fossil skull from China that made headlines last week may or may not be a million years old, but it’s probably closely related to Denisovans.

The fossil skull, dubbed Yunxian 2, is one of three unearthed from a terrace alongside the Han River, in central China, in a layer of river sediment somewhere between 600,000 and 1 million years old. Archaeologists originally identified them as Homo erectus, but Hanjiang Normal University paleoanthropologist Xiaobo Feng and his colleagues’ recent digital reconstruction of Yunxian 2 suggests the skulls may actually have belonged to someone a lot more similar to us: a hominin group defined as a species called Homo longi or a Denisovan, depending on who’s doing the naming.

The recent paper adds fuel—and a new twist—to that debate. And the whole thing may hinge on a third skull from the same site, still waiting to be published.

A front and a side view of a digitally reconstructed hominin skull

This digital reconstruction makes Yunxian 2 look less like a Homo erectus and more like a Denisovan (or Homo longi, according to the authors). Credit: Feng et al. 2025

Denisovan or Homo longi?

The Yunxian 2 skull was cracked and broken after hundreds of thousands of years under the crushing weight of all that river mud, but the authors used CT scans to digitally put the pieces back together. (They got some clues from a few intact bits of Yunxian 1, which lay buried in the same layer of mud just 3 meters away.) In the end, Feng and his colleagues found themselves looking at a familiar face; Yunxian 2 bears a striking resemblance to a 146,000-year-old Denisovan skull.

That skull, from Harbin in northeast China, made headlines in 2021 when a team of paleoanthropologists claimed it was part of an entirely new species, which they dubbed Homo longi. According to that first study, Homo longi was a distinct hominin species, separate from us, Neanderthals, and even Denisovans. That immediately became a point of contention because of features the skull shared with some other suspected Denisovan fossils.

Earlier this year, a team of researchers, which included one of the 2021 study’s authors, took samples of ancient proteins preserved in the Harbin skull; of the 95 proteins they found, three matched proteins encoded only in Denisovan DNA. While the June 2025 study suggested that Homo longi was a Denisovan all along, the new paper draws a different conclusion: Homo longi is a species that happens to include the population we’ve been calling Denisovans. As study coauthor Xijun Ni, of the Chinese Academy of Sciences, puts it in an email to Ars Technica, “Given their similar age range, distribution areas, and available morphological data, it is likely that Denisovans belong to the Homo longi species. However, little is known about Denisovan morphology.”

Of course, that statement—that we know little about Denisovan morphology (the shapes and features of their bones)—only applies if you don’t accept the results of the June 2025 study mentioned above, which clocked the Harbin skull as a Denisovan and therefore told us what one looks like.

And Feng and his colleagues, in fact, don’t accept those results. Instead, they consider Harbin part of some other group of Homo longi, and they question the earlier study’s methods and results. “The peptide sequences from Harbin, Penghu, and other fossils are too short and provide conflicting information,” Ni tells Ars Technica. Feng and his colleagues also question the results of another study, which used mitochondrial DNA to identify Harbin as a Denisovan.

In other words, Feng and his colleagues are pretty invested in defining Homo longi as a species and Denisovans as just one sub-group of that species. But that’s hard to square with DNA data.

Alas, poor Yunxian 2, I knew him well

Yunxian 2 has a wide face with high, flat cheekbones, a wide nasal opening, and heavy brows. Its cranium is higher and rounder than Homo erectus (and the original reconstruction, done in the 1990s), but it’s still longer and lower than is normal for our species. Overall, it could have held about 1,143 cubic centimeters of brain, which is in the ballpark of modern people. But its shape may have left less room for the frontal lobe (the area where a lot of social skills, logic, motor skills, and executive function happen) than you’d expect in a Neanderthal or a Homo sapiens skull.

Feng and his colleagues measured the distances between 533 specific points on the skull: anatomical landmarks like muscle attachment points or the joints between certain bones. They compared those measurements to ones from 26 fossil hominin skulls and several dozen modern human skulls, using a computer program to calculate how similar each skull was to all of the others.
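
The comparison described above can be sketched in a greatly simplified form: treat each skull as a set of landmark coordinates, compute a pairwise dissimilarity for every pair, and find each skull's nearest "lookalike." This toy version uses 5 fabricated landmarks instead of the study's 533, and skips the alignment (Procrustes superimposition) a real geometric-morphometric analysis would do first:

```python
import numpy as np

# Toy sketch of landmark-based skull comparison: each "skull" is a set of
# 3D landmark coordinates. Here 5 landmarks stand in for the study's 533,
# and all coordinates are randomly generated for illustration.
rng = np.random.default_rng(0)
names = ["Yunxian 2", "Harbin", "Dali", "Homo erectus"]
skulls = {name: rng.normal(size=(5, 3)) for name in names}

def dissimilarity(a, b):
    # Root-mean-square distance between corresponding landmarks.
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))

matrix = {(x, y): dissimilarity(skulls[x], skulls[y])
          for x in names for y in names}

# A skull's closest "lookalike" is its nearest neighbor in this matrix.
nearest = min((n for n in names if n != "Yunxian 2"),
              key=lambda n: matrix[("Yunxian 2", n)])
print("Closest to Yunxian 2:", nearest)
```

With real data, clusters in a matrix like this are what let researchers group Yunxian 2 with Harbin, Dali, and Jinniushi.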

Yunxian 2 fits neatly into a lookalike group with the Harbin skull, along with two other skulls that paleoanthropologists have flagged as belonging to either Denisovans or Homo longi. Those two skulls are a 200,000- to 260,000-year-old skull found in Dali County in northwestern China and a 260,000-year-old skull from Jinniushi (sometimes spelled Jinniushan) Cave in China.

Those morphological comparisons suggest some things about how the individuals who once inhabited these skulls might have been related to each other, but that’s also where things get dicey.

front and side views of 3 skulls.

An older reconstruction of the Yunxian 2 skull gives it a flatter look. Credit: government of Wuhan

Digging into the details

Most of what we know about how we’re related to our closest extinct hominin relatives (Neanderthals and Denisovans) comes from comparing our DNA to theirs and tracking how small changes in the genetic code build up over time. Based on DNA, our species last shared a common ancestor with Neanderthals and Denisovans sometime around 750,000 years ago in Africa. One branch of the family tree led to us; the other branch split again around 600,000 years ago, leading to Neanderthals and Denisovans (or Homo longi, if you prefer).
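
The dating logic here is a molecular-clock calculation: if mutations accumulate at a roughly constant rate, the fraction of sites at which two genomes differ estimates the time since their last common ancestor. A toy version of that arithmetic, with the rate and divergence values chosen as round illustrative numbers (not from any particular study):

```python
# Toy molecular-clock estimate. The per-site substitution rate and the
# observed divergence below are round numbers chosen for illustration only.

MUTATION_RATE = 0.5e-9   # substitutions per site per year (assumed)

def divergence_time(per_site_differences, rate=MUTATION_RATE):
    # Differences accumulate along BOTH lineages since the split,
    # hence the factor of two in the denominator.
    return per_site_differences / (2 * rate)

# e.g., two genomes differing at 0.075% of sites:
years = divergence_time(0.00075)
print(f"Estimated split: ~{years:,.0f} years ago")
```

Real estimates are far messier (rates vary across lineages and the genome, and interbreeding muddies the tree), which is one reason the DNA-based and skull-based dates in this debate can differ so widely.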

In other words, DNA tells us that Neanderthals and Denisovans are more closely related to each other than either is to us. (Unless you’re looking at mitochondrial DNA, which suggests that we’re more closely related to Neanderthals than to Denisovans; it’s complicated, and there’s a lot we still don’t understand.)

“Ancient mtDNA and genomic data show different phylogenetic relationships among Denisovans, Neanderthals and Homo sapiens,” says Ni. So depending on which set of data you use and where your hominin tree starts, it can be possible to get different answers about who is most closely related to whom. The fact that all of these groups interbred with each other can explain this complexity, but makes building family trees challenging.

It is very clear, however, that Feng and his colleagues’ picture of the relationships between us and our late hominin cousins, based on similarities among fossil skulls in their study, looks very different from what the genomes tell us. In their model, we’re more closely related to Denisovans, and the Neanderthals are off on their own branch of the family tree. Feng and his colleagues also say those splits happened much earlier, with Neanderthals branching off on their own around 1.38 million years ago; we last shared a common ancestor with Homo longi around 1 million years ago.

That’s a big difference from DNA results, especially when it comes to timing. And the timing is likely to be the biggest controversy here. In a recent commentary on Feng and his colleagues’ study, University of Wisconsin paleoanthropologist John Hawks argues that you can’t just leave genetic evidence out of the picture.

“What this research should have done is to put the anatomical comparisons into context with the previous results from DNA, especially the genomes that enable us to understand the relationships of Denisovan, Neanderthal, and modern human groups,” Hawks writes.

(It’s worth a side note that most news stories describe Yunxian 2 as being a million years old, and so do Feng and his colleagues. But electron spin resonance dating of fossil animal bones from the same sediment layer suggests the skull could be as young as 600,000 years old or as old as 1.1 million. That still needs to be narrowed down to everyone’s satisfaction.)

What’s in a name?

Of course, DNA also tells us that even after all this branching and migrating, the three species were still similar enough to reproduce, which they did several times. Many groups of modern people still carry traces of Neanderthal and Denisovan DNA in their genomes, courtesy of those exchanges. And some ancient Neanderthal populations were carrying around even older chunks of human DNA in the same way. That arguably makes species definitions a little fuzzy at best—and maybe even irrelevant.

“I think all these groups, including Neanderthals, should be recognized within our own species, Homo sapiens,” writes Hawks. Hawks contends that the differences among these hominin groups “were the kind that evolve among the populations of a single species over time, not starkly different groups that tread the landscape in mutually unrecognizable ways.”

But humans love to classify things (a trait we may have shared with Neanderthals and Denisovans), so those species distinctions are likely to persist even if the lines between them aren’t so solid. As long as that’s the case, names and classifications will be fodder for often heated debate. And Feng’s team is staking out a position that’s very different from Hawks’. “‘Denisovan’ is a label for genetic samples taken from the Denisova Cave. It should not be used everywhere. Homo longi is a formally named species,” says Ni.

Technically, Denisovans don’t have a formal species name, a Latinized moniker like Homo erectus that comes with a clear(ish) spot on the family tree. Homo longi could fill that role, but only if scientists can agree that the group really is a distinct species.

an archaeologist kneels in front of a partially buried skull

An archaeologist comes face to face with the Yunxian 3 skull. Credit: government of Wuhan

The third Yunxian skull

Paleoanthropologists unearthed a third skull from the Yunxian site in 2022. It bears a strong resemblance to the other two from the area (and is apparently in better shape than either of them), and it dates to about the same timeframe. A 2022 press release describes it as “the most complete Homo erectus skull found in Eurasia so far,” but if Feng and his colleagues are right, it may actually be a remarkably complete Homo longi (and/or Denisovan) skull. And it could hold the answers to many of the questions anthropologists like Feng and Hawks are currently debating.

“It remains pretty obvious that Yunxian 3 is going to be central to testing the relationships of this sample [of fossil hominins in Feng and colleagues’ paper],” writes Hawks.

The problem is that Yunxian 3 is still being cleaned and prepared. Preparing a fossil is a painstaking, time-consuming process that involves very carefully excavating it from the rocky matrix it’s embedded in, using everything from air-chisels to paintbrushes. And until that’s done and a scientific report on the skull is published, other paleoanthropologists don’t have access to any information about its features—which would be super useful for figuring out how to define whatever group we eventually decide it belongs to.

For the foreseeable future, the relationships between us and our extinct cousins (or at least our ideas about those relationships) will keep changing as we get more data. Eventually, we may have enough data from enough fossils and ancient DNA samples to form a clearer picture of our past. But in the meantime, if you’re drawing a hominin family tree, use a pencil.

Science, 2025.  DOI: 10.1126/science.ado9202  (About DOIs).

Photo of Kiona N. Smith

Kiona is a freelance science journalist and resident archaeology nerd at Ars Technica.

Is the “million-year-old” skull from China a Denisovan or something else? Read More »

scientists-unlock-secret-to-venus-flytrap’s-hair-trigger-response

Scientists unlock secret to Venus flytrap’s hair-trigger response

To trap its prey, the Venus flytrap sends rapid electrical impulses, which are generated in response to touch or stress. But the molecular identity of the touch sensor has remained unclear. Japanese scientists have identified the molecular mechanism that triggers that response and have published their work in a new paper in the journal Nature Communications.

As previously reported, the Venus flytrap attracts its prey with a pleasing fruity scent. When an insect lands on a leaf, it stimulates the highly sensitive trigger hairs that line the leaf. When the pressure becomes strong enough to bend those hairs, the plant will snap its leaves shut and trap the insect inside. Long cilia grab and hold the insect in place, much like fingers, as the plant begins to secrete digestive juices. The insect is digested slowly over five to 12 days, after which the trap reopens, releasing the dried-out husk of the insect into the wind.

In 2016, Rainer Hedrich, a biophysicist at Julius-Maximilians-Universität Würzburg in Bavaria, Germany, led the team that discovered that the Venus flytrap could actually “count” the number of times something touches its hair-lined leaves—an ability that helps the plant distinguish between the presence of prey and a small nut or stone, or even a dead insect. The plant detects the first “action potential” but doesn’t snap shut right away, waiting until a second zap confirms the presence of actual prey, at which point the trap closes. But the Venus flytrap doesn’t close all the way and produce digestive enzymes to consume the prey until the hairs are triggered three more times (for a total of five stimuli).
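
The counting behavior described above amounts to a small state machine with a decaying memory. A hypothetical sketch of that logic (the 30-second memory window is an invented stand-in; the plant's real "memory" is a decaying calcium concentration, not a fixed timer):

```python
# Hypothetical state-machine sketch of the flytrap's touch counting.
# The 30-second memory window is an assumption for illustration; the real
# "memory" is a waxing and waning calcium concentration, not a timer.

class FlytrapModel:
    MEMORY_WINDOW = 30.0  # seconds a prior touch is "remembered" (assumed)

    def __init__(self):
        self.touch_times = []
        self.closed = False
        self.digesting = False

    def touch(self, t):
        # Forget touches older than the memory window.
        self.touch_times = [x for x in self.touch_times
                            if t - x <= self.MEMORY_WINDOW]
        self.touch_times.append(t)
        n = len(self.touch_times)
        if n >= 2:
            self.closed = True     # second touch in the window: snap shut
        if n >= 5:
            self.digesting = True  # fifth touch: seal and secrete enzymes

trap = FlytrapModel()
trap.touch(0.0)   # a lone touch (a falling pebble, say) leaves the trap open
trap.touch(10.0)  # a second touch soon after closes it
```

This structure captures why a single stray touch, or two touches spaced far apart, never triggers the trap: the first stimulus has already "decayed" from memory by the time the second arrives.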

And in 2023, scientists developed a bioelectronic device to better understand the Venus flytrap’s complex signaling mechanism by mapping how those signals propagate. They confirmed that the electrical signal starts in the plant’s sensory hairs and then spreads radially outward with no clear preferred direction. And sometimes the signals were spontaneous, originating in sensory hairs that had not been stimulated.

Glowing green

This latest research is an outgrowth of a 2020 paper detailing how the Japanese authors genetically altered a Venus flytrap to gain important clues about how the plant’s short-term “memory” works. They introduced a gene for a calcium sensor protein called GCaMP6, which glows green whenever it binds to calcium. That green fluorescence allowed the team to visually track the changes in calcium concentrations in response to stimulating the plant’s sensitive hairs with a needle. They concluded that the waxing and waning of calcium concentrations in the leaf cells seem to serve as a kind of short-term memory for the Venus flytrap, though precisely how calcium concentrations work with the plant’s electrical network remained unclear.

Scientists unlock secret to Venus flytrap’s hair-trigger response Read More »

150-million-year-old-pterosaur-cold-case-has-finally-been-solved

150 million-year-old pterosaur cold case has finally been solved

Smyth thinks that so few adults show up in the fossil record in this region not only because they were more likely to survive, but also because those that died were not buried as quickly. Carcasses would float on the water anywhere from days to weeks. As they decomposed, parts would fall to the lagoon bottom. Juveniles were small enough to be swept under and buried quickly by sediments that would preserve them.

Cause of death

The humerus fractures found in Lucky I and Lucky II were especially significant because forelimb injuries are the most common among existing flying vertebrates. The humerus attaches the wing to the body and bears most flight stress, which makes it more prone to trauma. Most humerus fractures happen in flight as opposed to being the result of a sudden impact with a tree or cliff. And these fractures were the only skeletal trauma seen in any of the juvenile pterosaur specimens from Solnhofen.

Evidence suggesting the injuries to the two fledgling pterosaurs happened before death includes the displacement of bones while they were still in flight (something recognizable from storm deaths of extant birds and bats) and the smooth edges of the breaks, which occur in living bone, as opposed to the jagged edges of postmortem breaks. There were also no visible signs of healing.

Storms disproportionately affected flying creatures at Solnhofen, which were often taken down by intense winds. Many of Solnhofen’s fossilized vertebrates were pterosaurs and other winged species such as the bird ancestor Archaeopteryx. Flying invertebrates were also doomed.

Even marine invertebrates and fish were threatened by storm conditions, which churned the lagoons and brought deep waters with higher salt levels and low oxygen to the surface. Anything that sank to the bottom was exceptionally preserved because of these same conditions, which were too harsh for scavengers and paused decomposition. Mud kicked up by the storms also helped with the fossilization process by quickly covering these organisms and providing further protection from the elements.

“The same storm events responsible for the burial of these individuals also transported the pterosaurs into the lagoonal basins and were likely the primary cause of their injury and death,” Smyth concluded.

Although Lucky I and Lucky II were decidedly unlucky, the exquisite preservation of their skeletons, which shows how they died, has finally allowed researchers to solve a case that went cold 150 million years ago.

Current Biology, 2025. DOI: 10.1016/j.cub.2025.08.006

150 million-year-old pterosaur cold case has finally been solved Read More »

the-current-war-on-science,-and-who’s-behind-it

The current war on science, and who’s behind it


A vaccine developer and a climate scientist walk into a bar and write a book.

Fighting against the anti-science misinformation can feel like fighting a climate-driven wildfire. Credit: Anadolu

We’re about a quarter of the way through the 21st century.

Summers across the global north are now defined by flash floods, droughts, heat waves, uncontainable wildfires, and intensifying named storms, exactly as predicted by Exxon scientists back in the 1970s. The United States secretary of health and human services advocates against using the most effective tool we have to fight the infectious diseases that have ravaged humanity for millennia. People are eagerly lapping up the misinformation spewed and disseminated by AI chatbots, which are only just getting started.

It is against this backdrop that a climate scientist and a vaccine developer teamed up to write Science Under Siege. It is about as grim as you’d expect.

Michael Mann is a climate scientist at the University of Pennsylvania who, in 1998, developed the notorious hockey stick graph, which demonstrated that global surface temperatures were roughly flat until around the year 1900, when they started rising precipitously (and have not stopped). Peter Hotez is a microbiologist and pediatrician at Baylor College of Medicine whose group developed a low-cost, patent-free COVID-19 vaccine using public funds (i.e., not from a pharmaceutical company) and distributed it to almost a hundred million people in India and Indonesia.

Unlikely crusaders

Neither of them anticipated becoming crusaders for their respective fields—and neither probably anticipated that their respective fields would ever actually need crusaders. But they each have taken on the challenge, and they’ve been rewarded for their trouble with condemnation and harassment from Congress and death threats from the public they are trying to serve. In this book, they hope to take what they’ve learned as scientists and science communicators in our current world and parlay that into a call to arms.

Mann and Hotez have more in common than being pilloried all over the internet. Although they trained in disparate disciplines, their fields are now converging (as if they weren’t each threatening enough on their own). Climate change is altering the habitats, migrations, and reproductive patterns of pathogen-bearing wildlife like bats, mosquitoes, and other insects. It is causing the migration of humans as well. Our increasing proximity to these species in both space and time can increase the opportunities for us to catch diseases from them.

Yet Mann and Hotez insist that a third scourge is even more dangerous than these two combined. In their words:

It is currently impossible for global leaders to take the urgent actions necessary to respond to the climate crisis and pandemic threats because they are thwarted by a common enemy—antiscience—that is, politically and ideologically motivated opposition to any science that threatens powerful special interests and their political agendas. Unless we find a way to overcome antiscience, humankind will face its gravest threat yet—the collapse of civilization as we know it.

And they point to an obvious culprit: “There is, unquestionably, a coordinated, concerted attack on science by today’s Republican Party.”

They’ve helpfully characterized “the five principal forces of antiscience” into alliterative groups: (1) plutocrats and their political action committees, (2) petrostates and their politicians and polluters, (3) fake and venal professionals—physicians and professors, (4) propagandists, especially those with podcasts, and (5) the press. The general tactic is that (1) and (2) hire (3) to generate deceitful and inflammatory talking points, which are then disseminated by all-too-willing members of (4) and (5).

There is obviously a lot of overlap among these categories; Elon Musk, Vladimir Putin, Rupert Murdoch, and Donald Trump can all jump between a number of these bins. As such, the ideas and arguments presented in the book are somewhat redundant, as are the words used. Far too many things are deemed “ironic” (i.e., the same people who deny and dismiss the notion of human-caused climate change claimed that Democrats generated hurricanes Helene and Milton to target red states in October 2024) or “risible” (see Robert F. Kennedy Jr.’s claim that Dr. Peter Hotez sought to make it a felony to criticize Anthony Fauci).

A long history

Antiscience propaganda has been used by authoritarians for over a century. Stalin imprisoned physicists and attacked geneticists while famously enacting the nonsensical agricultural ideas of Trofim Lysenko, who thought genes were a “bourgeois invention.” This led to the starvation of millions of people in the Soviet Union and China.

Why go after science? The scientific method is the best means we have of discovering how our Universe works, and it has been used to reveal otherwise unimaginable facets of reality. Scientists are generally thought of as authorities possessing high levels of knowledge, integrity, and impartiality. Discrediting science and scientists is thus an essential first step for authoritarian regimes to then discredit any other types of learning and truth and destabilize their societies.

The authors trace the antiscience messaging on COVID, which followed precisely the same arc as that on climate change except condensed into a matter of months instead of decades. The trajectory started by maintaining that the threat was not real. When that was no longer tenable, it quickly morphed into “OK, this is happening, and it may actually get pretty bad for some subset of people, but we should definitely not take collective action to address it because that would be bad for the economy.”

It finally culminated in preying upon people’s understandable fears in these very scary times by claiming that this is all the fault of scientists who are trying to take away your freedom, be that bodily autonomy and the ability to hang out with your loved ones (COVID) or your plastic straws, hamburgers, and SUVs (climate change).

This mis- and disinformation has prevented us from dealing with either catastrophe by misleading people about the seriousness, or even existence, of the threats and/or harping on their hopeless nature, sapping us of the will to do anything to counter them. These tactics also sow division among people, practically ensuring that we won’t band together to take the kind of collective action essential to addressing enormous, complex problems. It is all quite effective. Mann and Hotez conclude that “the future of humankind and the health of our planet now depend on surmounting the dark forces of antiscience.”

Why, you might wonder, would the plutocrats, polluters, and politicians of the Republican Party be so intent on undermining science and scientists, lying to the public, fearmongering, and stoking hatred among their constituents? The same reason as always: to hold onto their money and power. The means to that end is thwarting regulations. Yes, it’s nefarious, but also so disappointingly… banal.

The authors are definitely preaching exclusively to the converted. They are understandably angry at what has been done to them and somewhat mocking of those who don’t see things their way. They end by trying to galvanize their followers into taking action to reverse the current course.

They advise that the best—really, the only—thing we can do now to effect change is to vote and hope for favorable legislation. “Only political change, including massive turnout to support politicians who favor people over plutocrats, can ultimately solve this larger systemic problem,” they write. But since our president and vice president don’t even believe in or acknowledge “systemic problems,” the future is not looking too bright.

The current war on science, and who’s behind it Read More »