Science


Large genome model: Open source AI trained on trillions of bases


System can identify genes, regulatory sequences, splice sites, and more.

Late in 2025, we covered the development of an AI system called Evo that was trained on massive numbers of bacterial genomes. It saw so many that, when prompted with sequences from a cluster of related genes, it could correctly identify the next one or suggest a completely novel protein.

That system worked because bacteria tend to cluster related genes together—something that’s not true in organisms with complex cells, which tend to have equally complex genome structures. Given that, our coverage noted, “It’s not clear that this approach will work with more complex genomes.”

Apparently, the team behind Evo viewed that as a challenge, because today it is describing Evo 2, an open source AI that has been trained on genomes from all three domains of life (bacteria, archaea, and eukaryotes). After training on trillions of base pairs of DNA, Evo 2 developed internal representations of key features in even complex genomes like ours, including things like regulatory DNA and splice sites, which can be challenging for humans to spot.

Genome features

Bacterial genomes are organized along relatively straightforward principles. Any genes that encode proteins or RNAs are contiguous, with no interruptions in the coding sequence. Genes that perform related functions, like metabolizing a sugar or producing an amino acid, tend to be clustered together, allowing them to be controlled by a single, compact regulatory system. It’s all straightforward and efficient.

Eukaryotes are not like that. The coding sections of genes are interrupted by introns, which don’t encode anything. Genes are regulated by sequences that can be scattered across hundreds of thousands of base pairs. And the sequences that define the edges of introns or the binding sites of regulatory proteins are only weakly defined: while a few positions are absolutely required, most simply have an above-average tendency toward a specific base (something like “45 percent of the time it’s a T”). Surrounding all of this in most eukaryotic genomes is a huge amount of DNA that has been termed junk: inactive viruses, terminally damaged genes, and so on.

That complexity has made eukaryotic genomes more difficult to interpret. And while a lot of specialized tools have been developed to identify things like splice sites, they’re all error-prone enough to become a problem when you’re analyzing something as large as a 3-billion-base genome. We can learn a lot more by making evolutionary comparisons and looking for sequences that have been conserved, but there are limits to that, and we’re often just as interested in the differences between species.

These sorts of statistical probabilities, however, are well-suited to neural networks, which are great at recognizing subtle patterns that can be impossible to pick out by eye. But you’d need absolutely massive amounts of data and computing time to process it and pick out some of these subtle features.

We now have the raw genome data that the process needs. Putting together a system to feed it into an effective AI training program, however, remained a challenge. That’s the challenge the team behind Evo took on.

Training a large genome model

The foundation of the Evo 2 system is a convolutional neural network called StripedHyena 2. The training took place in two stages. The initial stage focused on teaching the system to identify important genome features by feeding it sequences rich in them in chunks about 8,000 bases long. After that, there was a second stage in which sequences were fed a million bases at a time to provide the system the opportunity to identify large-scale genome features.
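The two-stage setup amounts to slicing genomes into progressively longer training windows. Here’s a minimal sketch of that idea; the window sizes come from the article, but the function itself is purely illustrative and is not Evo 2’s actual data pipeline:

```python
def chunk_genome(seq, window, stride=None):
    """Slice a long DNA string into fixed-length training windows.

    Stage one of Evo 2's training used windows of roughly 8,000 bases;
    stage two stretched that to about 1,000,000. This helper only
    illustrates the windowing concept.
    """
    stride = stride or window  # non-overlapping windows by default
    return [seq[i:i + window] for i in range(0, len(seq) - window + 1, stride)]

# Toy example: a 20-base "genome" sliced into 8-base windows
windows = chunk_genome("ACGTACGTACGTACGTACGT", 8)
```

In practice the second stage matters because features like distant regulatory sequences only become visible when a single training example spans them and the gene they control.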

The researchers trained two versions of their system using a dataset called OpenGenome2, which contains 8.8 trillion bases from all three domains of life, as well as viruses that infect bacteria. They did not include viruses that attack eukaryotes, given that they were concerned that the system could be misused to create threats to humans. Two versions were trained: one that had 7 billion parameters tuned using 2.4 trillion bases, and the full version with 40 billion parameters trained on the full open genome dataset.

The logic behind the training is pretty simple: if something’s important enough to have been evolutionarily conserved across a lot of species, it will show up in multiple contexts, and the system should see it repeatedly during training. “By learning the likelihood of sequences across vast evolutionary datasets, biological sequence models capture conserved sequence patterns that often reflect functional importance,” the researchers behind the work write. “These constraints allow the models to perform zero-shot prediction without any task-specific fine-tuning or supervision.”
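The zero-shot idea is simple to sketch: score a change by how much it lowers a sequence’s likelihood under the model. The interface below is hypothetical (`next_base_probs` is not Evo 2’s real API), but it shows the log-likelihood-ratio logic such models rely on:

```python
import math

def sequence_log_likelihood(model, seq):
    """Sum per-base log-probabilities under an autoregressive genome
    model. `model.next_base_probs(prefix)` is a hypothetical interface
    returning a dict like {"A": 0.3, "C": 0.2, ...}."""
    total = 0.0
    for i, base in enumerate(seq):
        total += math.log(model.next_base_probs(seq[:i])[base])
    return total

def zero_shot_variant_score(model, ref_seq, pos, alt):
    """Log-likelihood ratio of mutant to reference sequence. Strongly
    negative scores suggest the substitution breaks a conserved (and
    thus likely functional) pattern; no fine-tuning required."""
    mut_seq = ref_seq[:pos] + alt + ref_seq[pos + 1:]
    return (sequence_log_likelihood(model, mut_seq)
            - sequence_log_likelihood(model, ref_seq))
```

A model that has never been told what a splice site is can still flag a mutation there, simply because the mutant sequence looks improbable compared to everything it saw in training.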

That last aspect is important. We could, for example, tell it what known splice sites look like, which might help it pick out additional ones. But that might make it harder for the system to recognize unusual splice sites that we haven’t identified yet. Skipping the fine-tuning might also help it find genome features we’re not currently aware of at all, but which could become apparent through future research.

All of this has now been made available to the public. “We have made Evo 2 fully open, including model parameters, training code, inference code, and the OpenGenome2 dataset,” the paper announces.

The researchers also used a system that can identify internal features in neural networks to poke around inside Evo 2 and figure out what it had learned to recognize. They trained a separate neural network to read the firing patterns in Evo 2 and identify high-level features within them. Evo 2 clearly recognized protein-coding regions and the boundaries of the introns that interrupted them. It was also able to recognize some structural features of the encoded proteins (alpha helices and beta sheets), as well as mutations that disrupt their coding sequence. Even something like mobile genetic elements (which you can think of as DNA-level parasites) ended up with its own feature within Evo 2.

What is this good for?

To test the system, the researchers made single-base mutations and fed them into Evo 2 to see how it responded. Evo 2 could detect problems when the mutations affected the sites in DNA where transcription into RNA starts, or the sites where translation of that RNA into protein begins. It also recognized the severity of mutations: those that would interrupt protein translation, such as the introduction of stop signals, were identified as more significant changes than those that left translation intact.
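That kind of experiment is essentially an in-silico saturation mutagenesis scan: try every substitution at every position and record the model’s reaction. A self-contained sketch, where the scorer is a stand-in for a real model-based score (such as a log-likelihood ratio), not Evo 2’s actual interface:

```python
def saturation_scan(score_variant, ref_seq, bases="ACGT"):
    """Enumerate every single-base substitution in `ref_seq` and collect
    its effect as (position, alternate_base, score) tuples.
    `score_variant(ref_seq, pos, alt)` is assumed to return a number;
    with a real genome model it would be a likelihood-based score."""
    results = []
    for pos, ref_base in enumerate(ref_seq):
        for alt in bases:
            if alt != ref_base:
                results.append((pos, alt, score_variant(ref_seq, pos, alt)))
    return results

# Toy scorer: pretend any change to position 0 (a "start site") is severe
toy_score = lambda seq, pos, alt: -10.0 if pos == 0 else -0.1
scan = saturation_scan(toy_score, "ATG")
```

Ranking the resulting scores is how you’d separate mutations the model considers devastating (like new stop signals) from ones it shrugs off.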

It also recognized when sequences weren’t translated at all. Many key cellular functions are carried out directly by RNAs, and Evo 2 was able to recognize when mutations disrupted those, as well.

Impressively, the ability to recognize features in eukaryotic genomes occurred without the loss of its ability to recognize them in bacteria and archaea. In fact, the system seemed to be able to work out what species it was working in. A number of evolutionary groups use genetic codes with a different set of signals to stop the translation of proteins. Evo 2 was able to recognize when it was looking at a sequence from one of those species, and used the correct genetic code for them.

It was also good at recognizing features that tolerate a lot of variability, such as sites that signal where to splice RNAs to remove introns from the coding sequence of proteins. By some measures, it was better than software specialized for that task. The same was true when evaluating mutations in the BRCA2 gene, where many of the mutations are associated with cancer. Given additional training on known BRCA2 mutations, its performance improved further.

Overall, Evo 2 seems great for evaluating genomes and identifying key features. The researchers who built it suggest it could serve as a good automated tool for preliminary genome annotation.

But the striking thing about the early version of Evo was that, when prompted with a chunk of sequence that includes known bacterial genes, some of its responses included entirely new proteins with related functions. Now that it has been trained on more complex eukaryotic genomes, could it do the same?

We don’t entirely know. If given a bunch of DNA from yeast (a eukaryote), it would respond with a sequence that included functional RNAs, and gene-like sequences with regulatory information and splice sites. But the researchers didn’t test whether any of the proteins did anything in particular. And it’s difficult to see how they could even do that test. With bacterial genes, they could safely assume that the AI-generated gene should be doing something related to the nearby genes. But that’s generally not the case in eukaryotes, so it’s difficult to guess what functions they should even test for.

In a somewhat more informative test, the researchers asked Evo 2 to make some regulatory DNA that was active in one cell type and not another after giving it information about what sequences were active in both those cell types. The sequences that came out were then inserted into these cells and tested, but the results were pretty weak, with only 17 percent having activity that differed by a factor of two or more between the two cell types. That’s a major achievement, but it isn’t in the same realm as designing brand new proteins.

What’s next?

Overall, given that this has come out less than four months after the paper describing the original Evo, it’s not at all surprising that there wasn’t more work done to test what Evo 2 can do for designing biologically relevant DNA sequences. Biology experiments are hard and time-consuming, and it’s not always easy to judge in advance which ones will provide the most compelling information. So we’ll probably have to wait months to years to find out whether the community finds interesting things to do with Evo 2, and whether it’s good at solving any useful protein design problems.

There’s also the question of whether further training and specialization can create Evo 2 relatives that are especially good at specific tasks, such as evaluating genomes from cancer cells or annotating newly sequenced genomes. To an extent, it appears the research team wanted to get this out so that others could start exploring how it might be put to use; that’s consistent with the fact that all of the software was made available.

The big open question is whether this system has identified anything that we don’t know how to test for. Things like intron/exon boundaries and regulatory DNA have been subjected to decades of study, so we already know how to look for them and can recognize when Evo 2 spots them. But we’ve discovered a steady stream of new genome features—CRISPR repeats, microRNAs, and more—over the past decades. It remains entirely possible that there are features in the genome we’re not yet aware of, and that Evo 2 has picked them out.

It’s possible to imagine ways to use the tools described here to query Evo 2 and pick out new genome features. So I’m looking forward to seeing what might ultimately come out of that sort of work.

Nature, 2026. DOI: 10.1038/s41586-026-10176-5 (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



Re-creating the complex cuisine of prehistoric Europeans

The results: The team found traces of wild grasses and legumes, fruits or berries, green vegetables, and roots and tubers native to the broader region. Shards recovered from sites in the Don River basin showed these people used the seeds of wild legumes (possibly clover) and grasses, along with some evidence of bran and barley. By contrast, shards from the Upper Volga and Dnieper-Dvina region contained more traces of guelder rose berries and other fleshy fruits, and of smaller-seeded Amaranthaceae plants.

Shards from the Baltic region showed higher traces of freshwater fish, with some regions also including berries, sea beetroot, flowering rush, beets, and sea club-rush tubers. There were also traces of dairy products in shards from a site in Denmark, likely obtained from nearby farming communities.

For the cooking experiments, the authors explored different potential food mixtures focusing on two main plant species: guelder rose berries and species related to the Amaranthaceae family (beet, goosefoot, and saltbush specifically). The berries were gathered in the fall from the south of England and frozen right afterward. They boiled the berries with water in replica pottery vessels, combining some batches with freshwater fish like carp, and also varying the distance of the vessels from the open flames and active embers. They then sampled the cooking residues and compared those results to the samples taken from the prehistoric vessels.

“Our results show that there was a general tendency towards combining specific foods into distinct preparations and in particular regions,” the authors concluded, such as combining Viburnum berries with freshwater fish in the Upper Volga and Baltic regions. Fish accompanied by wild grasses and legumes were preferred in the Don River Basin, while other sites preferred their fish with green vegetables. So “hunter-gatherer-fishers were not living on fish alone,” the authors wrote. “They were actively processing and consuming a wide variety of plants.”

PLoS ONE, 2026. DOI: 10.1371/journal.pone.0342740 (About DOIs).



What we can learn from scientific analysis of Renaissance recipes


“a key change in how people constructed knowledge”

Multispectral imaging, proteomics, historical texts yield new insights into 16th-century medical manuals.

Credit: The John Rylands Research Institute and Library, The University of Manchester

Forget “eye of newt and toe of frog/wool of bat and tongue of dog.” People in the 16th century were more akin to DIY scientists than Macbeth’s three witches when it came to concocting home remedies for everything from hair loss and toothache to kidney stones and fungal infections. Medical manuals targeted at the layperson were hugely popular at the time, according to Stefan Hanss, an early modern historian at the University of Manchester in the UK. “Reader-practitioners” would tinker with the various recipes, tweaking them as needed and making personalized notes in the margins. And they left telltale protein traces behind as they did so.

Hanss is part of an interdisciplinary team of archaeologists, chemists, historians, conservators, and materials scientists who have analyzed trace proteins from the fingerprints of Renaissance people rifling through the pages of medical manuals. The team reported their findings in a paper published in The American Historical Review. It’s the first time researchers have used proteomics to analyze Renaissance recipes, enhanced further by in-depth archival research to place the scientific results in the proper historical context.

“We have so many recipes of that time, [including] cosmetic, medical, and culinary recipes, as well as handwritten recipes passed down for generations,” Hanss told Ars. “It’s really a key element of Renaissance culture, and [the manuscripts] are all covered with scribbled marginalia of [past] users. Experimentation was everywhere. It’s not only about book-learned knowledge but hands-on practical knowledge. It’s a key change in the way people constructed knowledge at that time.”

As previously reported, a number of analytical techniques have emerged over the last few decades to create historical molecular records of the culture in which various artworks were created. For instance, studying the microbial species that congregate on works of art may lead to new ways to slow down the deterioration of priceless aging art. Case in point: Scientists analyzed the microbes found on seven of Leonardo da Vinci’s drawings in 2020 using a third-generation sequencing method known as Nanopore, which uses protein nanopores embedded in a polymer membrane for sequencing. They combined the Nanopore sequencing with a whole-genome-amplification protocol and found that each drawing had its own unique microbiome.

Mass spectrometry-based proteomics is a relative newcomer to the field and is capable of providing a thorough and very detailed characterization of any protein residues present in a given sample, as well as any accumulated damage. The technique is so sensitive that less sample material is needed compared to other methods. And unlike, say, gas chromatography-mass spectrometry, it’s also capable of characterizing all proteins present in a sample (regardless of the complexity of the mixture), rather than being narrowly targeted to predefined proteins. In 2023, scientists used this approach to discover that beer byproducts were popular canvas primers for artists of the Danish Golden Age. Hanss et al. are extending this methodology to Renaissance medical manuals.

A thriving DIY medical marketplace

This latest study has its roots in an event Hanss organized a few years ago called “Microscopic Records,” which brought together experts in various scientific fields and early modern historians. One of the master classes on offer focused on proteomics. Hanss was intrigued when he learned that researchers had extracted proteins from the lower-right and left corners (i.e., where contact occurs when one turns a page) of archived manuscripts in Milan. “I thought, we must have a conversation about doing this for Renaissance recipes,” said Hanss. “We know there was experimentation, but we couldn’t really trace it. This is really the first time that we’ve sampled and identified and contextualized biochemical traces of materials.”

Hanss et al. focused on two 1531 German medical manuals published by 16th-century physician Bartholomäus Vogtherr: How to Cure and Expel All Afflictions and Illnesses of the Human Body and A Useful and Essential Little Book of Medicine for the Common Man. The two tomes are bound together into a single volume and are part of the collection of the John Rylands Research Institute and Library at Manchester. The recipes included domestic remedies for brain disease, infertility, skin disorders, hair loss, wounds, and various other severe illnesses, written in the vernacular and targeted at the common populace.

It was a relatively new genre at the time, per the authors, a kind of everyday DIY science, since the manuals encouraged at-home hands-on experimentation. In 16th-century Augsburg (a printing hub), “experimentation was everywhere,” and the city boasted a thriving medical marketplace. It’s clear that people used the Rylands copies of Vogtherr’s manuals for their own experiments because the margins are filled with scribbled notes and comments dating back to that period.

The first step was to take high-resolution photographs and then run the pages through multispectral imaging (including infrared and UV wavelengths), which helped them recover the most faded, previously illegible handwriting, such as on the inside cover. One scribbled note turned out to be instructions to use a mixture of viola and scorpion oil as a treatment for ulcers. Then they sampled various pages from the manuals for the proteomics analysis, focusing on areas where Renaissance users would be most likely to rest their writing hand or leave fingerprints. That’s also why they avoided the bindings, which are far more likely to be handled by modern-day conservators.

While proteomics cannot establish the dates of specific samples, the team was able to distinguish between contemporary and old peptides based on their degree of degradation (such as oxidation). The quantity of peptides detected was also a clue. In fact, the team ended up excluding one of the samples from the final paper because it yielded a far higher number of peptides (2,258) than expected, compared to all the other samples (which ranged from 40 to 210 peptides). And for these two particular manuals, “They were in use for more than a hundred years and we know the [users’] names,” said Hanss. “We could make an informed interpretation based on other recipes at the time, and letters exchanged between [Renaissance] medical practitioners.”

The handwritten marginalia are a fascinating window into how people experimented with and tweaked various Renaissance domestic remedies. For those suffering from urinary stones, for instance, a “reader-practitioner” commented that during painful flare-ups, “parsley powdered or soaked in wine” could be effective. There are references to the benefits of broadleaf plantain juice (administered anally), and eating scarlet hawthorn leaves.

The proteomics results confirmed, among other things, the presence of many popular ingredients used in the recipes, such as beech, watercress, and rosemary traces found next to hair loss remedies—commonly attributed to an “overheated brain”—along with cabbage and radish oil, chicory, lizards, and, um, human feces. (Just how badly do you want to grow back that thinning hair?) The manuscripts also include recipes for blonde hair dyes. The analysis revealed traces of plants with particularly striking yellow flowers on those pages. “That is a common theme in cosmetic and medical discourse at the time,” said Hanss. “The idea was to look for resemblances between the remedies and what you wish to achieve in terms of the treatment.”

One of the most remarkable results, per Hanss et al., was the recovery of collagen peptides from hippopotamus teeth or bone, pointing to the global circulation of more exotic ingredients in the 16th century. Hippo teeth were said to cure kidney stones and “take away toothache,” and were even used to make dentures.

Hanss et al. also found that several of the proteins had antimicrobial functions, such as dermcidin (derived from human sweat glands), which kills microbes like E. coli and the yeast that causes thrush. The samples also yielded insight into how Renaissance people’s bodies responded to the remedies. Traces of immunoglobulin, lipocalin, and lysozyme are indicators of an active immune response, for instance.

Hanss is so pleased with these initial results that he hopes to launch a large-scale project to extend this interdisciplinary approach to other collections of medical manuals. He also hopes to further improve the dating methodology. “The ingredients for success are there,” said Hanss. “It’s not only that we found new answers to old questions, but we are now in a position to ask completely new questions.”

The American Historical Review, 2025. DOI: 10.1093/ahr/rhaf405 (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Research roundup: Six cool science stories we almost missed


Smart underwear measures farts, brain cells play Doom, and AI discovers rules of an ancient game.

Illustration of a star that collapsed, forming a black hole. Credit: Keith Miller, Caltech/IPAC – SELab

It’s a regrettable reality that there is never enough time to cover all the interesting scientific stories we come across each month. So every month, we highlight a handful of the best stories that nearly slipped through the cracks. February’s list includes the revival of a forgotten battery design by Thomas Edison that could be ideal for renewable energy storage; a snap-on device to turn those boxers into “smart underwear” to measure how often we fart; and a dish of neurons playing Doom, among other highlights.

Reviving Edison’s battery design

An illustration symbolizes new battery technology: Proteins (red) hold tiny clusters of metal (silver). Each yellow ball in the structures at center represents a single atom of nickel or iron.

Credit: Maher El-Kady/UCLA


At the onset of the 20th century, electric cars powered by lead-acid batteries outnumbered gas-powered cars. The internal combustion engine ultimately won out, in part because those batteries limited the cars to a range of just 30 miles. But Thomas Edison believed a nickel-iron battery could extend that range to as much as 100 miles, while also offering a long life and recharging times of seven hours. An international team of scientists has revived Edison’s concept of a nickel-iron battery and created their own version, according to a paper published in the journal Small.

The team took their inspiration from nature, specifically how shellfish form their hard outer shells and animals form bones: Proteins create a scaffolding onto which calcium compounds cluster. For the battery scaffolding, the authors used beef byproduct proteins combined with graphene oxide, and then grew clusters of nickel for the positive electrodes and iron for the negative ones. The team superheated all the ingredients in water, then baked them at very high temperatures. The proteins charred into carbon, stripping away the oxygen atoms in the graphene oxide and embedding the nickel and iron clusters in the scaffolding. Essentially, it became an aerogel.

The folded structure limited the clusters to less than 5 nanometers across, translating into significantly more surface area for the chemical reactions that fuel the battery. The resulting prototype recharged in mere seconds and endured for more than 12,000 cycles, equivalent to about 30 years of daily recharging. However, the battery’s storage capacity is still well below that of current lithium-ion batteries, so powering EVs might not be the most promising application. The authors suggest it might be ideal for storing excess electricity generated by solar farms or other renewable energy sources.

Small, 2026. DOI: 10.1002/smll.202507934 (About DOIs).

Vanishing star became a black hole

In 2014, NASA’s NEOWISE project picked up a gradual brightening of infrared light coming from a massive star in the Andromeda galaxy, an observation that was confirmed by several other ground- and space-based telescopes. Astronomers kept monitoring the star, so they also noticed when it quickly dimmed in 2016. Once one of the brightest stars in that galaxy, it effectively “vanished” from sight; it would be like Betelgeuse suddenly disappearing. It’s now only detectable in the mid-infrared range.

The obvious explanation was that the star was dying and had collapsed into a black hole, but if so, it didn’t go through the supernova phase that usually occurs with stars of this size. That makes it an intriguing object for further study. After analyzing archival data from NEOWISE, a team of astronomers concluded that this was indeed a case for direct collapse, according to a paper published in the journal Science.

Theoretical work from the 1970s provided a possible explanation. As gravity begins to collapse the star and the core forms a dense neutron star, the accompanying burst of neutrinos typically creates a shock wave strong enough to rip apart the star’s outer layers, leading to a supernova. But some theorists suggested that the shock wave might not always be powerful enough to expel all that stellar material, which instead falls inward, and the baby neutron star directly collapses into a black hole without ever going supernova.

Convection, it seems, is key. It occurs because matter near the star’s center is hotter than the outer regions, so gases move from hotter to cooler regions. The authors of this latest paper suggest that as the core collapses, gas in the outer layers is moving rapidly enough to avoid falling into the core. The inner layers orbit outside the new black hole and eject the outer layers, which cool and form dust that hides the hot gas still orbiting the black hole. That dust warms and re-radiates at mid-infrared wavelengths, giving the object a slight glow that should last for decades.

This work has already led the team to re-evaluate a similar star first observed a decade ago, so this may constitute a new class of objects—ones that are harder to detect because they don’t go supernova and because of the faintness of the afterglow. At least now astronomers know to look for that distinctive signature.

Science, 2026. DOI: 10.1126/science.adt4853 (About DOIs).

Smart undies measure the gas you pass

A research team demos a prototype of the smart underwear.

Credit: University of Maryland.


Let’s face it, everybody farts, and those suffering from conditions that produce excess gas fart more than most. But physicians don’t have a reliable means of quantifying just how much gas people produce each day. In other words, they lack a baseline of what is normal—like we have for blood glucose or cholesterol—which makes it difficult to determine whether the farting in any given case is excessive. To address this, scientists at the University of Maryland have devised “smart underwear” to measure the wearer’s flatulence, according to a paper published in the journal Biosensors and Bioelectronics.

Brantley Hall and his cohorts developed a small device with electrochemical sensors that snaps onto one’s underwear; those sensors track any emitted farts around the clock, even as the wearer sleeps. In the past, estimates of fart frequency relied on small studies using invasive methods or unreliable self-reports. So perhaps it’s not surprising that Hall et al. recorded much higher estimates in their study: healthy adults pass gas an average of 32 times per day, compared to just 14 times per day reported in past studies.

There was also considerable variation among individuals, with a lowest fart rate of just four times per day and a highest rate of 59 per day. This is a first step to determining a healthy baseline, which the team hopes to do via their Human Flatus Atlas program. People can volunteer to don the smart underwear 24/7 in hopes of correlating the flatulence patterns with diet and microbiome composition across a much larger sample size. You can enroll in the Human Flatus Atlas here; you must live in the US and be 18 years or older to participate. (Fun bonus fact: noted gastroenterologist Michael Levitt was apparently known as the “King of Farts” because of his extensive body of research on the subject.)

Biosensors and Bioelectronics, 2026. DOI: 10.1016/j.biosx.2025.100699 (About DOIs).

Do you wanna build a snowman?

This image was taken by NASA's New Horizons spacecraft on Jan. 1, 2019 during a flyby of Kuiper Belt object 2014 MU69, informally known as Ultima Thule. It is the clearest view yet of this remarkable, ancient object in the far reaches of the solar system – and the first small

Credit: NASA/Public domain


Just past Neptune lies the Kuiper Belt, a band littered with remnants from the early formative period of our Solar System, including dwarf planets and smaller bodies known as planetesimals. Roughly 10 percent of those planetesimals consist of two connected spheres resembling a rudimentary snowman, called contact binaries. In a paper published in the Monthly Notices of the Royal Astronomical Society, Michigan State University researchers reported evidence for a process by which these contact binaries may have formed.

Planetesimals are the result of dust and pebbles gradually packing together into aggregate objects in response to gravity, much like forming a snowball. Every now and then, the rotation of the collapsing cloud rips a nascent object in two, forming two separate planetesimals that orbit each other. Most theories of how the unusual snowman-shaped contact binaries formed rely on rare events or exotic phenomena, which would not account for the large number of contact binaries that we observe.

Prior computational simulations modeled colliding objects in the Kuiper Belt as fluid-like blobs that merged into spheres, but this did not result in conditions conducive to forming the snowman configuration. These new simulations retained the colliding objects’ strength and allowed them to rest against each other. This revealed that after two colliding planetesimals begin to orbit one another, gravity causes them to spiral inward until they eventually make contact and fuse. Because the Kuiper Belt is relatively empty, it is rare for the contact binaries to crash into another object, so they are less likely to break apart.
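
The inspiral described above can be sketched with a toy calculation: assume two equal-mass bodies on a circular mutual orbit lose a small fraction of orbital energy per orbit (a crude stand-in for the dissipation the real simulations capture) and track the separation until the bodies touch. Every number below is an illustrative assumption, not a value from the paper.

```python
# Toy inspiral model (an illustration, not the researchers' simulation code):
# two equal-mass planetesimals on a circular mutual orbit dissipate a small
# fraction of orbital energy per orbit, so the separation shrinks until the
# bodies touch and fuse into a contact binary. All numbers are assumptions.
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
m = 5e14                # kg, very roughly a ~5 km icy body (assumed)
radius = 5e3            # m, body radius (assumed)
a = 50e3                # m, initial separation (assumed)
loss_per_orbit = 0.01   # fractional energy dissipated each orbit (assumed)

orbits = 0
while a > 2 * radius:               # contact: separation = sum of radii
    energy = -G * m * m / (2 * a)   # circular two-body orbital energy
    energy *= 1 + loss_per_orbit    # dissipation makes it more negative
    a = -G * m * m / (2 * energy)   # which means a smaller separation
    orbits += 1

print(f"contact after {orbits} orbits")
```

Because the separation shrinks geometrically, even a modest per-orbit loss brings the pair into contact after a few hundred orbits, which is why a relatively gentle, common process can plausibly produce so many contact binaries.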

Monthly Notices of the Royal Astronomical Society, 2026. (About DOIs).

Is this carved rock a Roman board game?

Image of a carved rock, the possible game board, with pencil marks highlighting the incised lines.

Credit: Het Romeins Museum

There is archaeological evidence for various kinds of board games from all over the world dating back millennia: Senet and Mehen in ancient Egypt, for example; a strategy game called ludus latrunculorum (“game of mercenaries”) favored by Roman legions; a 4,000-year-old stone board discovered in 2022 that just might be a precursor to an ancient Middle Eastern game known as the Royal Game of Ur; or a Bronze Age board game that might be the earliest form of Hounds and Jackals, originating in Asia, which challenges the longstanding assumption that the game originated in Egypt.

There may be other ancient games that archaeologists still don’t know about, nor is it always possible for them to tease out what the rules of play might be. AI is emerging as a useful tool for determining the latter. Most recently, researchers have used AI tools to work out the rules of what they believe might be another ancient Roman game board, according to a paper published in the journal Antiquity. The object in question is a flat stone housed in the Roman Museum in Heerlen, the Netherlands, with a distinctive geometric pattern carved on one side. Walter Crist of Leiden University noticed some visibly uneven wear consistent with pushing stone game pieces across the surface, with the most wear along one particular diagonal line.

Crist thought this might be a Roman game board and decided to pit two AI agents against each other in thousands of “games” to test different variations in possible rules, gleaned from known ancient board games from around the world. Crist and his co-authors identified nine possibilities, all so-called blocking games, in which a player with more pieces tries to stop their opponent from moving. They have dubbed this potentially new game Ludus Coriovalli. There is no way to know for sure yet, since no other carved slabs with that particular pattern have been found, but it might be a prototype game, per Crist.
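
The paper's actual software isn't reproduced here, but the underlying idea of pitting two automated players against each other under a candidate rule set and scoring the outcome can be sketched with an invented toy blocking game (these are not the real rules of Ludus Coriovalli):

```python
# Invented toy example (not the authors' method or the real rules): evaluate
# a candidate rule set by pitting two random-policy agents against each
# other many times. Here, a lone "runner" on a 1D board loses if the
# "blocker," who has more pieces, traps it against the wall.
import random

def play(block_pieces, board_size=8, max_turns=60, rng=random):
    blockers = set(range(block_pieces))   # blocker's pieces start at one end
    runner = board_size - 1               # runner starts at the far end
    for _ in range(max_turns):
        # Runner moves one step to an adjacent empty cell, if any exists.
        moves = [p for p in (runner - 1, runner + 1)
                 if 0 <= p < board_size and p not in blockers]
        if not moves:
            return "blocked"              # runner trapped: blocker wins
        runner = rng.choice(moves)
        # Blocker advances one randomly chosen piece toward the runner.
        piece = rng.choice(sorted(blockers))
        step = piece + 1 if runner > piece else piece - 1
        if 0 <= step < board_size and step not in blockers and step != runner:
            blockers.remove(piece)
            blockers.add(step)
    return "draw"                         # indecisive rule set

def decisiveness(block_pieces, n_games=500):
    rng = random.Random(0)                # fixed seed for reproducibility
    wins = sum(play(block_pieces, rng=rng) == "blocked"
               for _ in range(n_games))
    return wins / n_games

print(decisiveness(2), decisiveness(4))
```

A real analysis would search over many rule variants like this and keep those that produce decisive, balanced games consistent with the wear pattern on the stone.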

Antiquity, 2026. DOI: 10.15184/aqy.2025.10264 (About DOIs).

Brain cells in a dish play Doom

In 2022, a company called Cortical Labs managed to get brain cells grown in a dish—dubbed DishBrain—electrically stimulated in such a way as to create useful feedback loops, enabling them to “learn” to play Pong, albeit badly. This provided intriguing evidence that neural networks formed from actual neurons spontaneously develop the ability to learn. Now the company is back with a video (see above) showing DishBrain playing Doom—technically the open-sourced Freedoom, which lacks some of the copyrighted demon and weapon elements.

As in the 2022 work, we’re talking about a dish with a set of electrodes on the floor. When neurons are grown in the dish, these electrodes can do two things: sense the activity of the neurons above them or stimulate those neurons. But the team has added a new interface that makes the system easier to program, using Python. Teaching DishBrain to play Pong took years of painstaking effort; getting it to play Freedoom took just one week—a significant improvement.
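
Cortical Labs' actual API isn't public here, so every class and method name below is invented; this is only a sketch of what a Python closed-loop cycle of this kind might look like, with a mock object standing in for the electrode hardware.

```python
# Everything below is invented for illustration; a mock array stands in for
# the dish so the closed-loop pattern (read spikes, decode an action,
# stimulate as feedback) runs as-is.
import random

class MockElectrodeArray:
    """Stand-in for the hardware: an 8x8 grid of sense/stimulate sites."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.last_pattern = None
    def read_spikes(self):
        # Fake spike counts per electrode for the last sampling window.
        return [[self.rng.randint(0, 5) for _ in range(8)] for _ in range(8)]
    def stimulate(self, pattern):
        # Real hardware would deliver current pulses; we just record it.
        self.last_pattern = pattern

def choose_action(spikes):
    # Toy decoder: compare total firing in the left vs. right half.
    left = sum(sum(row[:4]) for row in spikes)
    right = sum(sum(row[4:]) for row in spikes)
    return "left" if left > right else "right"

dish = MockElectrodeArray()
for step in range(3):                   # three cycles of the feedback loop
    action = choose_action(dish.read_spikes())
    good_outcome = (action == "right")  # stand-in for in-game feedback
    # Predictable stimulation after good outcomes, noisy after bad ones,
    # echoing the feedback scheme described for the original Pong work.
    dish.stimulate("patterned" if good_outcome else "random")
    print(step, action, dish.last_pattern)
```

The design point is the loop itself: sensing, decoding, and feedback stimulation all scripted from ordinary Python, which is what reportedly cut development time from years to a week.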

DishBrain still can’t come close to matching the performance of the best Doom players, but it learned faster than conventional silicon-based machine learning. It’s also not comparable to a human brain. “Yes, it’s alive, and yes, it’s biological, but really what it is being used as is a material that can process information in very special ways that we can’t re-create in silicon,” Brett Kagan of Cortical Labs told New Scientist. In fact, in 2024, scientists taught hydrogels—soft, flexible biphasic materials that swell but do not dissolve in water—to play Pong, inspired by the company’s earlier research. (Hydrogels can also “learn” to beat in rhythm with an external pacemaker, just like living cells.)

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Research roundup: Six cool science stories we almost missed

The strange animals that control their body heat


Some creatures can dramatically alter their internal temperature and outlast storms, floods, and predators

An edible dormouse. Credit: DeAgostini/Getty Images

In 1774, British physician-scientist Charles Blagden received an unusual invitation from a fellow physician: to spend time in a small room that was hotter, he wrote, “than it was formerly thought any living creature could bear.”

Many people may have been appalled by this offer, but Blagden was delighted by the opportunity for self-experimentation. He marveled as his own temperature remained at 98° Fahrenheit (approximately 37° Celsius), even as the temperature of the room approached 200°F (about 93°C).

Today, this ability to maintain a stable body temperature—called homeothermy—is known to exist among myriad species of mammals and birds. But there are also some notable exceptions. The body temperature of the fat-tailed dwarf lemur, for example, can fluctuate by nearly 45°F (25°C) over a single day.

In fact, a growing body of research suggests that many more animals than scientists once appreciated employ this flexible approach—heterothermy—varying their body temperature for minutes, hours, or weeks at a time. This may help the animals to persist through all sorts of dangers.

“Because we’re homeotherms, we assume all mammals work the way we do,” says Danielle Levesque, a mammalian ecophysiologist at the University of Maine. But in recent years, as improvements in technology allowed researchers to more easily track small animals and their metabolisms in the wild, “we’re starting to find a lot more weirdness,” she says.

The most extreme—and well-known—form of heterothermy is classic hibernation, which has been most extensively studied in critters who use it to save energy and so survive the long, cold winters of the Northern Hemisphere. These animals enter long periods of what scientists call deep torpor, when metabolism slows to a crawl and body temperature can drop to just above freezing.

But hibernation is just one end of what some scientists now consider a spectrum. Many mammals can deploy shorter bouts of shallow torpor—loosely defined as smaller reductions in metabolism and smaller fluctuations in body temperature—as the need arises, suggesting that torpor has more functions than scientists previously realized.

“It’s extremely complicated,” says comparative physiologist Fritz Geiser of the University of New England in Australia. “It’s much more interesting than homeothermy.”

Australian eastern long-eared bats, for example, adjust their torpor use based on day-to-day changes in weather conditions. Mari Aas Fjelldal, a bat biologist at the Norwegian University of Life Sciences and the University of Helsinki, used tiny transmitters to measure skin temperatures as 37 free-ranging bats in Australia went about their daily lives. Like many heterothermic species, the bats spent more time in torpor when it was cold, but they also sank into torpor more often as rain and wind speeds picked up, Fjelldal and colleagues reported in Oecologia in 2021. This hunkering down makes sense, says Fjelldal: Wind and rain make flying more energetically demanding—a big problem when you weigh less than a small packet of M&M’s—and make it more costly to find the insects the bats eat.

There are even reports of pregnant hoary bats entering torpor during unpredictable spring storms, a physiological maneuver that basically pauses their pregnancies. “It means that they can, to some degree, actually decide a bit when to give birth,” says Fjelldal, “which is really handy when you’re living in an environment that can be quite harsh in the spring.” Fjelldal, who wasn’t involved in that study, notes that producing milk is expensive metabolically, so it’s advantageous to give birth when food availability is good.

Other animals, like sugar gliders—tiny, pink-nosed marsupials that “fly” through the trees using wing-like folds of skin—rarely use torpor but seem able to take advantage of it in the case of major weather emergencies. During a storm with category 1 cyclone winds of nearly 100 kilometers per hour and 9.5 centimeters of rain falling in a single night, the gliders were more likely to stay cuddled up in their tree-hole nests, and many entered torpor, reducing body temperature from 94.1°F (34.5°C) to an average of about 66°F (19°C), Geiser and colleagues found.

Similarly, in response to an accidental flooding event in the lab, researchers observed a highly unusual period of multiday torpor in a golden spiny mouse, its temperature reaching a low of about 75°F (24°C).

This more flexible use of torpor can help heterotherms wait out a catastrophe, Geiser says. In contrast, homeothermic species can’t just dial back their need for food and water and may not be able to outlast challenging conditions.

“Maybe there’s no food, maybe no water, it may be really warm,” says ecophysiologist Julia Nowack of Liverpool John Moores University in England, a coauthor on the sugar glider study. Torpor, especially in the tropics, has “lots of different triggers.”

Threats of a different sort, such as the presence of predators, can also prompt hunkering down. The (perhaps perfectly named) edible dormouse, for example, sometimes enters long periods of torpor in early summer. At first, this behavior puzzled researchers—why snooze away the summer, when temperatures are comfortable and food abundant, especially if it meant forgoing the chance to reproduce?

After looking at years of data collected by various scientists, a pair of researchers concluded that because spring and early summer are especially active periods for owls, these small snackable critters were likely opting to spend their nights torpid, safely hidden in underground burrows, to avoid becoming dinner. In what is thought to be a similar strategy to avoid nocturnal predators, Fjelldal’s bats alter their torpor use slightly depending on the phase of the moon, spending more time torpid as the moon grows fuller and they become easier to spot.

The fat-tailed dunnart, a mouse-like carnivorous marsupial native to Australia, is a third species to lie low when it feels more at risk of being eaten. In one study, researchers placed dunnarts in two types of enclosures: Some had lots of ground cover in the form of plastic sheeting, simulating an environment protected from predators, while other enclosures had little cover, simulating a greater risk of predation. In the higher-risk settings, the animals foraged less and their body temperatures became more variable.

Levesque, who has studied similar non-torpor temperature flexibility in large tree shrews, says that even small variations in body temperature can be important for saving water and energy.

Indeed, water loss during hot weather can pose serious risks to many mammals, and heterothermy is an important conservation tool for some. As Blagden observed, people are marvelously capable of maintaining stable temperatures even in horrifically hot environments, due in large part to our sweating abilities. But this isn’t necessarily a good strategy for smaller mammals—such evaporative cooling in a sweltering climate can quickly lead to dehydration.

Instead, creatures like Madagascar’s leaf-nosed bats use torpor. On warm days, the bats enter mini bouts of torpor lasting just a few minutes. But during especially hot days, the bats become torpid for up to seven hours, reducing their metabolism to less than 25 percent of normal and allowing their body temperature to rise as high as 109.2°F (42.9°C). And in an experiment with ringtail possums, slightly raising their body temperature by about 5.4°F (3°C) during a simulated heat wave saved the animals an estimated 10 grams of water per hour—a lot for a creature weighing less than 800 grams.

This heterothermic way of life gives some animals a bit of a buffer when it comes to coping with variability in their environments, says physiological ecologist Liam McGuire of the University of Waterloo in Ontario, Canada. But it can only do so much, he says; heterothermy is unlikely to exempt them from the challenge of rapidly evolving weather conditions brought by climate change.

As for Blagden, he saw the human body as remarkable in its capacity to maintain a steady temperature, even by “generating cold” when ambient temperatures climbed too high. Today, however, scientists are beginning to appreciate that for many mammals, allowing body temperature to be a bit more flexible may be key to survival as well.

This story originally appeared at Knowable Magazine

Photo of Knowable Magazine

Knowable Magazine explores the real-world significance of scholarly work through a journalistic lens.

Neanderthals seemed to have a thing for modern human women

By now, it’s firmly established that modern humans and their Neanderthal relatives met and mated as our ancestors expanded out of Africa, resulting in a substantial amount of Neanderthal DNA scattered throughout our genome. Less widely recognized is that some of the Neanderthal genomes we’ve seen have pieces of modern human DNA as well.

Not every modern human has the same set of Neanderthal DNA, however; different people will, by chance, have inherited different fragments. But there are also some areas, termed “Neanderthal deserts,” where none of the Neanderthal DNA seems to have persisted. Notably, the largest Neanderthal desert is the entire X chromosome, raising questions about whether this reflects the evolutionary fitness of genes there or mating preferences.
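
As a toy illustration of what a “desert” looks like in this kind of analysis (the coordinates below are invented, not real introgression calls), one can tally the fraction of each chromosome covered by called archaic segments:

```python
# Invented coordinates for illustration only, not real introgression calls.
# Tallying the fraction of each chromosome covered by called archaic
# segments makes a "desert" obvious: its coverage sits at zero.
chrom_lengths = {"chr1": 248e6, "chr7": 159e6, "chrX": 155e6}  # base pairs
segments = {                     # (start, end) of called segments, invented
    "chr1": [(1e6, 3e6), (50e6, 52.5e6)],
    "chr7": [(10e6, 11e6)],
    "chrX": [],                  # the desert: no surviving segments
}

for chrom, length in chrom_lengths.items():
    covered = sum(end - start for start, end in segments[chrom])
    print(f"{chrom}: {covered / length:.2%} archaic ancestry")
```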

Now, three researchers at the University of Pennsylvania, Alexander Platt, Daniel N. Harris, and Sarah Tishkoff, have done the converse analysis: examining the X chromosomes of the handful of completed Neanderthal genomes we have. It turns out there’s a strong bias toward modern human sequences there as well, and the authors interpret that as selective mating, with Neanderthal males showing a strong preference for modern human females and their descendants.

What type of selection are we looking at?

Given how long modern humans and Neanderthals had been evolving as separate populations, some degree of genetic incompatibility is definitely possible. Lots of proteins interact in various ways, and the genes behind these interaction networks will evolve together—a change in one gene will often lead to compensatory changes in other genes in the network. Over time, those changes may mean re-introducing the original gene will actually disrupt the network, with a negative impact on fitness.

That means the introduction of some Neanderthal genes into the modern human genome (or vice versa) would be disruptive and make carriers of them less fit. So they’d be selected against and lost over the ensuing generations. Of course, some segments would likely be lost at random—the genome’s pretty big, and the modern human population was likely large and growing, allowing its DNA to dilute out the influence of other human populations. Figuring out which influence is dominant can be challenging.

Photons that aren’t actually there influence superconductivity

Despite the headline, this isn’t really a story about superconductivity—at least not the superconductivity that people care about, the stuff that doesn’t require exotic refrigeration to work. Instead, it’s a story about how superconductivity can be used as a test of some of the weirder consequences of quantum mechanics, one that involves non-existent particles of light that still act as if they exist.

Researchers have found a way to get these virtual photons to influence the behavior of a superconductor, ultimately making it worse. That may, in the end, tell us something useful about superconductivity, but it’ll probably take a little while.

Virtual reality

The story starts with quantum field theory, which is incredibly complex, but the simplified version is that even empty space is filled with fields that govern the interactions of any quantum objects in or near that space. You can think of different particles as energetic excitations of these fields—so a photon is simply an energetic state of the quantum field.

Some of these particles have real existences we can track, like a photon emitted by a laser and absorbed by a detector some distance away. But the quantum field also allows for virtual photons, which simply act to transmit the electromagnetic force between particles. We can’t really directly detect these, but we can definitely track their effects.

One of the stranger consequences of this is that locations that have a strong electromagnetic field can be filled with virtual photons even when no real ones are present.

Which brings us to one of the materials central to the new work: boron nitride. Like the more famous graphene, boron nitride forms a series of interlinked hexagonal rings, extending out into macroscopic sheets. The bulk material is made of sheets layered onto sheets layered onto yet more sheets. This has an effect on light transiting through the material. In one direction, the light will simply slam into the material, getting absorbed or scattered. But if it’s oriented along the plane of the sheets, it’s possible for the light to travel in the space between the boron and nitrogen atoms.

The physics of squeaking sneakers

We’re all familiar with the high-pitched squeak of basketball shoes on the court, or of tires squealing on pavement. Scientists conducted several experiments and discovered that the geometry of a sneaker’s tread pattern determines the squeak’s frequency, an insight that enabled the team to make rubber blocks tuned to specific frequencies and slide them across glass surfaces to play Star Wars’ “Imperial March.”

“Tuning frictional behavior on the fly has been a long-standing engineering dream,” said co-author Katia Bertoldi of Harvard University. “This new insight into how surface geometry governs slip pulses paves the way for tunable frictional metamaterials that can transition from low-friction to high-grip states on demand.” In addition, the dynamics revealed by these results are similar to those of tectonic faults and thus give scientists a new model for the mechanics of earthquakes, according to their new paper published in the journal Nature.

Leonardo da Vinci is usually credited with conducting the first systematic study of friction in the late 15th century, a subfield now known as tribology that deals with the dynamics of interacting surfaces in relative motion. Da Vinci’s notebooks depict how he pulled rows of blocks using weights and pulleys, an approach that is still used in frictional studies today, as well as examining the friction produced in screw threads, wheels, and axles. The authors of this latest paper used an experimental setup similar to da Vinci’s.

The squeaking of sneakers on a gym floor is usually attributed to friction, specifically a stick-slip variety that involves cycles of sticking and sliding between two surfaces. But that model is best suited for interfaces involving two rigid objects, such as squeaking door hinges. Sneaker soles sliding across a gym floor involve one hard object (the floor) and one soft one (the sneaker sole). Bertoldi et al. wanted a more complete understanding of the dynamics of soft-on-rigid interfaces.

First, the team slid commercial basketball shoes (the Nike CU3503-100) across a smooth, dry glass plate, simultaneously capturing sound and visual imagery of what was happening between the sole and the glass (i.e., the frictional interface). They identified opening pulses traveling non-uniformly in the sliding direction, producing temporary, localized separations between the shoe sole and the glass plate that propagate at supersonic speeds. Those audible squeaks aren’t random; the frequency is determined by the repetition rate of the generated pulses.

Boozy chimps fail urine test, confirm hotly debated theory

The urine of chimpanzees contains high levels of alcohol byproduct, most likely because the chimps regularly gorge themselves on fermented fruit, according to a new paper published in the journal Biology Letters. It’s the latest evidence in support of a hotly debated theory regarding the evolutionary origins of human fondness for alcohol.

As previously reported, in 2014, University of California, Berkeley (UCB) biologist Robert Dudley wrote a book called The Drunken Monkey: Why We Drink and Abuse Alcohol. His controversial “drunken monkey hypothesis” proposed that the human attraction to alcohol goes back about 18 million years, to the origin of the great apes, whose attraction to the scent of ethanol would have helped them identify the presence of ripe fruit from a distance. At the time, skeptical scientists insisted that this was unlikely because chimpanzees and other primates just don’t eat fermented fruit or nectar.

But reports of primates doing just that have accumulated over the ensuing decade. Earlier this year, we reported that researchers had caught wild chimpanzees on camera engaging in what appears to be the sharing of fermented African breadfruit with measurable alcoholic content. That observational data was the first evidence of the sharing of alcoholic foods among nonhuman great apes in the wild. The authors measured the alcohol content of the fruit with a handy portable breathalyzer and found that almost all of the fallen fruit (90 percent) contained some ethanol, with the ripest containing the highest levels—the equivalent of 0.61 percent ABV (alcohol by volume).

And last September, Dudley co-authored a paper reporting the first measurements of the ethanol content of fruits favored by chimps in the Ivory Coast and Uganda, finding that chimps consume 14 grams of alcohol per day, the equivalent of a standard alcoholic drink in the US. After adjusting for the chimps’ lower body mass, the authors concluded the chimps are consuming nearly two drinks per day.
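
The “nearly two drinks” figure follows from simple mass scaling. The 14-gram figures come from the article; the body masses below are round-number assumptions chosen for illustration:

```python
# Back-of-envelope version of the scaling described above. The 14-gram
# figures come from the article; the body masses are round-number
# assumptions chosen for illustration.
ethanol_per_day_g = 14.0     # measured daily chimp intake
us_standard_drink_g = 14.0   # ethanol in one US standard drink
human_mass_kg = 70.0         # assumed reference human
chimp_mass_kg = 40.0         # assumed adult chimp

drinks = ethanol_per_day_g / us_standard_drink_g
drinks_adjusted = drinks * human_mass_kg / chimp_mass_kg
print(drinks, drinks_adjusted)   # 1.0 1.75, i.e. "nearly two drinks"
```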

A thankless task

The next step was to sample the chimps’ urine to see if it contains any alcohol metabolites, as was found in a 2022 study on spider monkeys. This would further refine estimates of how much ethanol-laden fruit the chimps eat every day. That thankless task fell to Aleksey Maro, a UCB graduate student who spent last summer in Ngogo collecting samples beneath the chimps’ sleeping trees—protected from the constant streams by an umbrella. Sharifah Namaganda, a Ugandan graduate student at the University of Michigan, showed him how to make shallow bowls out of plastic bags hung on forked twigs for more efficient collection. He also collected samples from puddles of urine on the forest floor.

Following 35% growth, solar has passed hydro on US grid

On Tuesday, the US Energy Information Administration released full-year data on how the country generated electricity in 2025. It’s a bit of a good news/bad news situation. The bad news is that overall demand rose appreciably, and a fair chunk of that was met by additional coal use. On the good side, solar continued its run of astonishing growth, generating 35 percent more power than a year earlier and surpassing hydroelectric power for the first time.

Shifting markets

Overall, electrical consumption in the US rose by 2.8 percent, or about 121 terawatt-hours. Consumption had been largely flat for nearly two decades, with efficiency gains and the decline of heavy industry offsetting the effects of population and economic growth. There were plenty of year-to-year changes, however, driven by factors ranging from heating and cooling demand to a global pandemic. Given that history, the growth in demand in 2025 is a bit concerning, but it’s not yet a clear signal that the factors expected to inevitably drive growth have kicked in.

(These factors include things like the switch to heat pumps, the electrification of transportation, and the growth in data centers. While the first two of those involve a more efficient use of energy overall, they involve electricity replacing direct use of fossil fuels, and so will increase demand on the grid.)

The story of the year is how that demand was met. Had demand grown more slowly, the additional 85 terawatt-hours generated by expanded utility-scale and small-scale solar installations would easily have covered it. As it was, the growth of utility-scale solar was only sufficient to cover about two-thirds of the rising demand (or 73 percent if you include wind power). With no new nuclear plants on the horizon, the alternative was to meet it with fossil fuels.
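
The shares quoted above can be checked with quick arithmetic on the article's figures. Note the 85 TWh includes small-scale solar, so this combined share comes out a bit above the "two-thirds" quoted for utility-scale solar alone:

```python
# Quick arithmetic check on the EIA figures quoted above.
demand_growth_twh = 121.0    # from the article
solar_growth_twh = 85.0      # utility-scale plus small-scale solar

solar_share = solar_growth_twh / demand_growth_twh
implied_total_twh = demand_growth_twh / 0.028   # from the 2.8 percent figure
print(f"solar covered {solar_share:.0%} of growth; "
      f"total demand ~{implied_total_twh:.0f} TWh")
```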

Scientists crack the case of “screeching” Scotch tape

In 1953, Russian scientists peeling Scotch tape in a vacuum reported detecting electrons with sufficient energy to emit X-rays. Other scientists were skeptical, but the phenomenon was finally confirmed in 2008, when UCLA physicists produced X-rays while unwinding a roll of Scotch tape in a vacuum chamber. The goal was to harness triboluminescence for X-ray imaging, and the team produced a low-quality X-ray image of a lab member’s finger (see image below). Fortunately, this only works in a vacuum, so everyday Scotch tape users are safe.

A shock to the system

X-ray images of a human finger taken with peeling tape. Credit: Carlos G. Camara et al., 2008

Peeling Scotch tape produces sound as well as light, typically attributed to the stick-slip mechanism at play during the peeling process. In 2010, co-author Sigurdur Thoroddsen of King Abdullah University in Saudi Arabia and colleagues used ultra-fast imaging to identify a crucial micro-fracture phenomenon during the slip phase: a sequence of transverse cracks that travel across the width of the adhesive at supersonic speeds. A follow-up 2024 study found a direct correspondence between the screeching sound and those transverse cracks but did not identify a mechanism.

That is the purpose of this latest study. Thoroddsen et al. wondered whether the sound was directly generated by a crack’s rapidly moving tip, which would also produce the distinctive discrete sound wave pulses associated with peeling Scotch tape. The authors experimentally tested their hypothesis by conducting simultaneous high-speed imaging of the propagating fractures and the sound waves traveling in the air. They manually unpeeled Scotch tape using a metal rod, capturing the cracks with two video cameras and the sound with two microphones synchronized to the cameras, the better to pinpoint the origin of the pressure pulses.

Their results showed that the screeching arises from a train of weak shocks that culminate when the transverse cracks reach the edge of the tape. The supersonic speed at which they travel, relative to the surrounding air, is crucial to the generation of those shockwaves. “A partial vacuum is produced between the tape and the solid when the crack opens,” the authors explained. “The crack moves too fast for this void to be filled immediately, even though air is sucked in from the direction perpendicular to the crack. The void therefore moves with the crack until it reaches the end of the tape and collapses into the stationary air outside.” Each time a fracture tip reaches the edge of the tape, it generates a sound pulse—hence the telltale screech.
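
Since each crack arrival at the tape's edge emits one pulse, the pitch is set by the pulse repetition rate. Under the simplifying assumption of evenly spaced cracks (the speed and spacing below are illustrative guesses, not the paper's measurements), the frequency is just peel speed over crack spacing:

```python
# One pulse per crack arrival means the pitch equals the pulse repetition
# rate. The peel speed and crack spacing here are illustrative assumptions,
# not values from the paper.
peel_speed = 0.1           # m/s, assumed hand-peeling speed
crack_spacing = 50e-6      # m, assumed distance between successive cracks

frequency_hz = peel_speed / crack_spacing
print(f"{frequency_hz:.0f} Hz")   # ~2 kHz, squarely in the audible range
```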

Physical Review E, 2026. DOI: 10.1103/p19h-9ysx (About DOIs).

Pentagon buyer: We’re happy with our launch industry, but payloads are lagging


“The point is to get missions out the door as fast as possible. Two to three years is too slow.”

Maj. Gen. Stephen Purdy oversees the Space Force’s acquisition programs at the Pentagon. Credit: Jonathan Newton/The Washington Post via Getty Images

DALLAS—The Space Force officer tasked with overseeing more than $24 billion in research and development spending says the Pentagon is more interested in supporting startups building new space sensors and payloads than adding yet another rocket company to its portfolio.

The statement, made at a space finance conference in Dallas last week, was one of several points Maj. Gen. Stephen Purdy wanted to get across to a room full of investors and commercial space executives.

The other points on Purdy’s agenda were that the Space Force is more interested in high-volume production than spending money to develop the latest technologies, and that the military has, at least for now, lost one of its most important tools for supporting and diversifying the space industrial base.

The rhetoric around prioritizing payloads over launchers aligns with the Space Force’s recent history of supporting small startups. Since 2020, SpaceWERX, the Space Force’s commercial innovation program, has awarded 23 funding agreements—called Strategic Funding Increases (STRATFIs)—to commercial space startups developing new sensors, software, satellite components, spacecraft buses, and orbital transfer vehicles. SpaceWERX awarded a single STRATFI agreement to a launch company—ABL Space Systems—and that firm has since exited the space launch market.

“We’re on path for mass-produced launch,” said Purdy, the military deputy for space acquisition in the Department of the Air Force. “We have got our ranges situated so we can do mass-produced launch. We’ve got our data centers and our data structure for mass-production. We’ve got AI pieces that are mass-produced, satellite buses are nearly there, and our payloads are the last element. Payloads at mass-produced affordability, at scale, is the key element.”

K2’s Gravitas satellite, set for launch next month, will test the company’s Hall-effect thruster, solar arrays, and other systems.

Credit: K2


Putting the money in

Payloads, Purdy told Ars after his talk, are “the last frontier” for scaling space missions. “The point is to get missions out the door as fast as possible. Two to three years is too slow. We’ve got to get down to one week. I’m not talking about super exquisite [payloads]. That’s not most of our missions. The commercial industry, your Kuipers [Amazon LEO], your Starlinks, have sort of got the comm piece down, but we’re still struggling in a lot of other stuff.”

One kind of payload Purdy identified was infrared sensors, which often pair with cryocoolers that chill their detectors to temperatures low enough to pick out faint targets, such as distant missile plumes, fires, explosions, or other objects in space. The technology isn’t as eye-catching as a rocket launch, but it will be key to many Space Force programs, including the Golden Dome missile defense shield backed by the Trump administration.

“I remain convinced that we’re going to think about the mission that we need, and we’re going to need satellites out the door and launched and in orbit within the week, at scale,” Purdy said. “I’m very convinced that that’s the path that we’re going to move down on the commercial and government side.”

The companies that come closest to that pace of satellite manufacturing are the ones Purdy mentioned: SpaceX’s Starlink and Amazon’s LEO broadband networks. SpaceX and Amazon produce multiple satellites per day, but those spacecraft are near-identical copies. The Space Force needs plenty of rockets and communications satellites, but it also needs diverse payloads and sensors to ride those launch vehicles and generate the data routed through relay stations in orbit.

Before President Trump ever uttered the words “Golden Dome,” the Space Force’s Space Development Agency was already striving to deploy a network of at least several hundred government-owned missile-detection, tracking, and data-relay satellites. Those satellites have suffered delays due to supply chain issues, particularly long lead times and delays in satellite buses, infrared payloads, laser communication terminals, and radiation-hardened processors.

Singing the blues

But the Space Force has lost access to one of the tools it used to help solve these problems. Many space mission components come from small businesses, and some parts come from overseas. The Space Force used STRATFIs, Small Business Innovation Research (SBIR), and Small Business Technology Transfer (STTR) grants to pay companies for basic research, experimentation, and scaling up manufacturing capacity. STRATFIs, SBIRs, and STTRs provided seed funding for high-risk, high-reward research and development.

Congress last year failed to reauthorize these programs, which are also used by NASA and other federal agencies. Opponents of a clean extension wanted legislation to cap how much funding could go to each grant recipient.

“I’ve got to get SBIRs and STRATFIs reauthorized, so I need the community’s help to get that done,” Purdy said. “There are some valid concerns that need to be addressed. All that needs to be addressed, but it affects the space industrial base a lot more than the other areas, and so I need everyone to kind of pile on and help get that done.”

Purdy took a victory lap by listing several STRATFIs that have, so far, yielded major results, at least for investors. K2 Space, a company developing high-power, low-cost satellite platforms, received $30 million in funding from the Space Force and Air Force in 2024. A year later, K2 closed a $250 million fundraising round at a company valuation of $3 billion. Apex Space, another startup looking to scale satellite manufacturing, received $11 million in strategic funding in 2024. A year later, Apex became a unicorn, exceeding a valuation of $1 billion. Impulse Space, which is working on in-space propulsion, received a STRATFI funding commitment from the Pentagon in 2024, helping propel the startup to a valuation of $1.8 billion.

“Years of SBIRs and STRATFIs have set the stage … We’ve been doing that for three or four or five years, we’ve produced a nice pool of 60 or 70 different companies that can help bid on all our upcoming new contracts, which is really nice,” Purdy said.

Under the Trump administration, the Defense Department has taken more steps to get cash in the hands of defense contractors. The Pentagon announced last month a $1 billion “direct-to-supplier” investment in L3Harris to expand production capacity of US solid rocket motors. This gives the federal government a direct equity stake in L3Harris’s missile business.

A Trump executive order last month also excoriated the defense industry for ballooning executive salaries, stock buybacks, and systemic lethargy. “You see some strong language through executive order and other mechanisms to say, ‘Hey, companies, you need to put in more CapEx yourselves. You need to kick in more yourselves.’ We’re no longer just going to provide you billions of dollars just for you to go build buildings,” Purdy said.

“And there’s some threat language on the back end of that. You’re going to do that, or else we’re going to start cutting you off. We’re going to start looking at other providers. That’s out in the open and subject for debate. But there’s a big carrot coming along with that, and that’s multi-year procurements. Multi-year procurements are the carrot to allow the investing community to have some amount of confidence,” Purdy continued.

“We’re not looking to be your R&D arm.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.
