Biology

New dinosaur species is the punk rock version of an ankylosaur

And we have known for sure that the armor was around back then, given that we’ve found the skin-derived osteoderms that make up the armor in Jurassic deposits. But with little more than a rib and a handful of mouth parts to go on, it wasn’t really possible to say much more than that.

Until now, that is. Because the new Spicomellus remains show extremely clearly that the armor of ankylosaurs got less elaborate over time.

The small, solid-looking spikes found along the edges of later ankylosaurs? Forget those. Spicomellus had a back that was probably bristling with sharper spines, along with far larger ones along its outer edges. Each rib appears to have generated as many as six individual spikes. At a handful of locations, these spikes extended out to nearly a meter, looking more like lances than anything needed to ward off a close-in attack.

And the largest of these were along its neck. On the neck’s upper surface, several osteoderms fused to form a massive half-collar of bone from which five or more individual spikes extended, each among the longest on the animal’s body. And there were three of these structures along the neck. “No known ankylosaur possesses any condition close to the extremely long pairs of spines on the cervical half-ring of Spicomellus,” its discoverers note.

As if its hedgehog-on-acid appearance weren’t enough, handles present on the tail vertebrae suggest that it also had a weaponized tail. All told, the researchers sum things up by saying, “The new specimen reveals extreme dermal armour modifications unlike those of any other vertebrate, extinct or extant, which fall far outside of the range of morphologies shown by other armoured dinosaurs.”

Out go the hypotheses

Because it’s so unusual, the skeleton’s characteristics are difficult to place within a neat family tree of the ankylosaurs. The researchers note that some details of its skeleton do suggest Spicomellus groups among the ankylosaurs, and they conclude that it’s probably an early branch from the main lineage. But without any other significant examples from the lineage at that time, it’s an extremely tentative conclusion. Still, the alternative is that this thing is unrelated to the only other organisms that share at least a few of its bizarre features, which is a difficult idea to swallow.


An inner-speech decoder reveals some mental privacy issues

But it struggled with more complex phrases.

Pushing the frontier

Once the mental privacy safeguard was in place, the team started testing its inner speech system, beginning with cued words. The patients sat in front of a screen that displayed a short sentence and had to imagine saying it. Performance varied, reaching 86 percent accuracy with the best-performing patient on a limited vocabulary of 50 words, but dropping to 74 percent when the vocabulary was expanded to 125,000 words.

But when the team moved on to testing if the prosthesis could decode unstructured inner speech, the limitations of the BCI became quite apparent.

The first unstructured inner speech test involved watching arrows pointing up, right, or left in a sequence on a screen. The task was to repeat that sequence after a short delay using a joystick. The expectation was that the patients would repeat sequences like “up, right, up” in their heads to memorize them—the goal was to see if the prosthesis would catch it. It kind of did, but the performance was just above chance level.

Finally, Krasa and his colleagues tried decoding more complex phrases without explicit cues. They asked the participants to think of the name of their favorite food or recall their favorite quote from a movie. “This didn’t work,” Krasa says. “What came out of the decoder was kind of gibberish.”

In its current state, Krasa thinks, the inner speech neural prosthesis is a proof of concept. “We didn’t think this would be possible, but we did it and that’s exciting! The error rates were too high, though, for someone to use it regularly,” Krasa says. He suggested the key limitation might be in hardware—the number of electrodes implanted in the brain and the precision with which we can record the signal from the neurons. Inner speech representations might also be stronger in other brain regions than they are in the motor cortex.

Krasa’s team is currently involved in two projects that stemmed from the inner speech neural prosthesis. “The first is asking the question [of] how much faster an inner speech BCI would be compared to an attempted speech alternative,” Krasa says. The second one is looking at people with a condition called aphasia, where people have motor control of their mouths but are unable to produce words. “We want to assess if inner speech decoding would help them,” Krasa adds.

Cell, 2025.  DOI: 10.1016/j.cell.2025.06.015


For some people, music doesn’t connect with any of the brain’s reward circuits

“I was talking with my colleagues at a conference 10 years ago and I just casually said that everyone loves music,” recalls Josep Marco Pallarés, a neuroscientist at the University of Barcelona. But it was a statement he started to question almost immediately, given there were clinical cases in psychiatry where patients reported deriving absolutely no pleasure from listening to any kind of tunes.

So, Pallarés and his team spent the past 10 years researching the neural mechanisms behind a condition they called specific musical anhedonia: the inability to enjoy music.

The wiring behind joy

When we like something, it is usually a joint effect of circuits in our brain responsible for perception—be it perception of taste, touch, or sound—and reward circuits that give us a shot of dopamine in response to nice things we experience. For a long time, scientists attributed a lack of pleasure from things most people find enjoyable to malfunctions in one or more of those circuits.

You can’t enjoy music when the parts of the brain that process auditory stimuli don’t work properly, since you can’t hear it in the way that you would if the system were intact. You also can’t enjoy music when the reward circuit refuses to release that dopamine, even if you can hear it loud and clear. Pallarés, though, thought this traditional idea lacked a bit of explanatory power.

“When your reward circuit doesn’t work, you don’t experience enjoyment from anything, not just music,” Pallarés says. “But some people have no hearing impairments and can enjoy everything else—winning money, for example. The only thing they can’t enjoy is music.”


Mammals that chose ants and termites as food almost never go back

Insects are more influential than we realize

By showing that ant- and termite-based diets evolved repeatedly, the study highlights the overlooked role of social insects in shaping biodiversity. “This work gives us the first real roadmap, and what really stands out is just how powerful a selective force ants and termites have been over the last 50 million years, shaping environments and literally changing the face of entire species,” Barden said.

However, according to the study authors, we still do not have a clear picture of how much of an impact insects have had on the history of life on our planet. Lots of lineages have been reshaped by organisms with outsize biomass—and today, ants and termites have a combined biomass exceeding that of all living wild mammals, giving them a massive evolutionary influence.

However, there’s also a flip side. Eight of the 12 myrmecophagous origins are represented by just a single species, meaning most of these lineages could be vulnerable if their insect food sources decline. As Barden put it, “In some ways, specializing in ants and termites paints a species into a corner. But as long as social insects dominate the world’s biomass, these mammals may have an edge, especially as climate change seems to favor species with massive colonies, like fire ants and other invasive social insects.”

For now, the study authors plan to keep exploring how ants, termites, and other social insects have shaped life over millions of years, not through controlled lab experiments, but by continuing to use nature itself as the ultimate evolutionary archive. “Finding accurate dietary information for obscure mammals can be tedious, but each piece of data adds to our understanding of how these extraordinary diets came to be,” Vida argued.

Evolution, 2025. DOI: 10.1093/evolut/qpaf121 (About DOIs)

Rupendra Brahambhatt is an experienced journalist and filmmaker. He covers science and culture news, and for the last five years, he has been actively working with some of the most innovative news agencies, magazines, and media brands operating in different parts of the globe.


Betel nuts have been giving people a buzz for over 4,000 years

Ancient rituals and customs often leave behind obvious archaeological evidence. From the impeccably preserved mummies of Egypt to psychoactive substance residue that remained at the bottom of a clay vessel for thousands of years, it seems as if some remnants of the past, even if not all are immediately visible, have defied the ravages of time.

Chewing betel nuts is a cultural practice in parts of Southeast Asia. When chewed, these reddish nuts, which are the fruit of the areca palm, release psychoactive compounds that heighten alertness and energy, promote feelings of euphoria, and help with relaxation. They are usually wrapped in betel leaves with lime paste made from powdered shells or corals, depending on the region.

Critically, the teeth of ancient betel nut chewers are usually distinguishable by their red staining. So when archaeologist Piyawit Moonkham, of Chiang Mai University in Thailand, unearthed 4,000-year-old skeletons from the Bronze Age burial site of Nong Ratchawat, the lack of telltale red stains appeared to indicate that the individuals they belonged to were not chewers of betel nuts.

Yet when he sampled plaque from the teeth, he found that several of the teeth from one individual contained compounds found in betel nuts. This invisible evidence could indicate that teeth-cleaning practices had removed the color, or that there were alternative methods of consumption.

“We found that these mineralized plaque deposits preserve multiple microscopic and biomolecular indicators,” Moonkham said in a study recently published in Frontiers. “This initial research suggested the detection potential for other psychoactive plant compounds.”

Since time immemorial

Betel nut chewing has been practiced in Thailand for at least 9,000 years. During the Lanna Kingdom, which began in the 13th century, teeth stained from betel chewing were considered a sign of beauty. While the practice is fading, it is still a part of some religious ceremonies, traditional medicine, and recreational gatherings, especially among certain ethnic minorities and people living in rural areas.


The case of the coke-snorting Chihuahua

Every dog owner knows that canines are natural scavengers and that vigilance is required to ensure they don’t eat toxic substances. But accidental ingestions still happen—like the Chihuahua that vets discovered had somehow managed to ingest a significant quantity of cocaine, according to a case study published in the journal Frontiers in Veterinary Science.

There have been several studies investigating the harmful effects cocaine can have on the cardiovascular systems of both humans and animals. However, these controlled studies are primarily done in laboratory settings and often don’t match the messier clinical realities. “Case reports are crucial in veterinary medicine by providing real-world examples,” said co-author Jake Johnson of North Carolina State University. “They capture clinical scenarios that larger studies might miss, preserve unusual presentations for future reference, and help build our collective understanding of rare presentations, ultimately improving emergency preparedness and treatment protocols.”

In this case, the 2-year-old male Chihuahua presented as lethargic and unresponsive. His owners had found him with his tongue sticking out, unable to focus visually. The Chihuahua was primarily an outdoor dog but was also allowed inside, and all of his vaccines were up to date. Examination revealed bradycardia (i.e., a slow heart rate), a blue tinge to the dog’s mucous membranes—often a sign of too much unoxygenated hemoglobin circulating through the system—and dilated pupils. The dog’s symptoms faded after the vet administered a large dose of atropine, followed by epinephrine.

Then the dog was moved to a veterinary teaching hospital for further evaluation and testing. A urine test was positive for cocaine with traces of fentanyl, confirmed with liquid chromatography testing. The authors estimate the dog could have snorted (or ingested) as much as 96 mg of the drug. Apparently the Chihuahua had a history of ingesting things it shouldn’t, but the owners reported no prescription medications missing at home. They also did not have any controlled substances or illegal drugs like cocaine in the home.


How old is the earliest trace of life on Earth?


A recent conference sees doubts raised about the age of the oldest signs of life.

Where the microbe bodies are buried: metamorphosed sediments in Labrador, Canada, containing microscopic traces of carbon. Credit: Martin Whitehouse

The question of when life began on Earth is as old as human culture.

“It’s one of these fundamental human questions: When did life appear on Earth?” said Professor Martin Whitehouse of the Swedish Museum of Natural History.

So when some apparently biological carbon was dated to at least 3.95 billion years ago—making it the oldest remains of life on Earth—the claim sparked interest and skepticism in equal measure, as Ars Technica reported in 2017.

Whitehouse was among those skeptics. This July, he presented new evidence to the Goldschmidt Conference in Prague that the carbon in question is only between 2.7 and 2.8 billion years old, making it younger than other traces of life found elsewhere.

Organic carbon?

The carbon in question is in rock in Labrador, Canada. The rock was originally silt on the seafloor that, it’s argued, hosted early microbial life that was buried by more silt, leaving the carbon as their remains. The pressure and heat of deep burial and tectonic events over eons have transformed the silt into a hard metamorphic rock, and the microbial carbon in it has metamorphosed into graphite.

“They are very tiny, little graphite bits,” said Whitehouse.

The key to showing that this graphite was originally biological rather than geological is its carbon isotope ratio. From life’s earliest days, its enzymes have preferred the slightly lighter isotope carbon-12 over the marginally heavier carbon-13. Organic carbon is therefore much richer in carbon-12 than geological carbon, and the Labrador graphite does indeed have this “light” biological isotope signature.
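Geochemists quantify that lightness as a δ13C value, which expresses how far a sample’s 13C/12C ratio deviates, in parts per thousand, from a reference standard. (The article doesn’t quote the Labrador measurements; the formula below is just the standard definition, with strongly negative values being the biological signature in question.)

$$
\delta^{13}\mathrm{C} = \left(\frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{standard}}} - 1\right) \times 1000\ \text{‰}
$$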

The key question, however, is its true age.

Mixed-up, muddled-up, shook-up rocks

Sorting out the age of the carbon-containing Labrador rock is a geological can of worms.

These are some of the oldest rocks on the planet—they’ve been heated, squished, melted, and faulted multiple times as Earth went through the growth, collision, and breakup of continents before being worn down by ice and exposed today.

“That rock itself is unbelievably complicated,” said Whitehouse. “It’s been through multiple phases of deformation.”

In general, the only ways to date sediments are by a layer of volcanic ash within them or by distinctive fossils in the sediments. Neither is available in these Labrador rocks.

“The rock itself is not directly dateable,” said Whitehouse, “so then you fall onto the next best thing, which is you want to look for a classic field geology cross-cutting relationship of something that is younger and something that you can date.”

The idea, which is as old as the science of geology itself, is to bracket the age of the sediment by finding a rock formation that cuts across it. Logically, the cross-cutting rock is younger than the sediment it cuts across.

In this case, the carbon-containing metamorphosed siltstone is surrounded by swirly, gray banded gneiss rock, but the boundary between the siltstone and the gray gneiss is parallel, so there’s no cross-cutting to use.

Professor Tsuyoshi Komiya of The University of Tokyo was a coauthor on the 3.95-billion-year age paper. His team used a cross-cutting rock they found at a different location and extrapolated that to the carbon-bearing siltstone to constrain its age. “It was discovered that the gneiss was intruded into supracrustal rocks (mafic and sedimentary rocks),” said Komiya in an email to Ars Technica.

But Whitehouse disputes that inference between the different outcrops.

“You’re reliant upon making these very long-distance assumptions and correlations to try to date something that might actually not have anything to do with what you think you’re dating,” he said.

Professor Jonathan O’Neil of the University of Ottawa, who was not involved in either Whitehouse’s or Komiya’s studies but who has visited the outcrops in question, agrees with Whitehouse. “I remember I was not convinced either by these cross-cutting relationships,” he told Ars. “It’s not clear to me that one is necessarily older than the other.”

With the field geology evidence disputed, the other pillar holding up the 3.95-billion-year-old date is its radiometric date, measured in zircon crystals extracted from the rocks surrounding the metamorphosed siltstone.

The zircon keeps the score

Geologists use the mineral zircon to date rocks because when it crystallizes, it incorporates uranium but not lead. So as radioactive uranium slowly decays into lead, the ratio of uranium to lead provides the age of the crystal.
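The arithmetic of that clock is simple, even if interpreting it isn’t. Labs typically combine both uranium decay chains, but the uranium-238 chain alone illustrates the principle:

$$
t = \frac{1}{\lambda_{238}}\ln\!\left(1 + \frac{^{206}\mathrm{Pb}}{^{238}\mathrm{U}}\right), \qquad \lambda_{238} \approx 1.55 \times 10^{-10}\ \mathrm{yr}^{-1}
$$

A measured ²⁰⁶Pb/²³⁸U ratio of about 0.82, for example, works out to an age of roughly 3.86 billion years—right in the neighborhood of the dates at stake here.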

But the trouble with any date obtained from rocks as complicated as these is knowing exactly what geological event it dates—the number alone means little without the context of all the other geological evidence for the events that affected the area.

Both Whitehouse and O’Neil have independently sampled and dated the same rocks as Komiya’s team, and where Komiya’s team got a date of 3.95 billion years, Whitehouse’s and O’Neil’s new dates are both around 3.87 billion years. Importantly, O’Neil’s and Whitehouse’s dates are far more precise, with errors around plus-or-minus 5 or 6 million years, which is remarkably precise for dates in rocks this old. The 3.95 date had an error around 10 times bigger. “It’s a large error,” said O’Neil.

But there’s a more important question: How is that date related to the age of the organic carbon? The rocks have been through many events that could each have “set” the dates in the zircons. That’s because zircons can survive multiple re-heatings and even partial remelting, with each new event adding a new layer, or “zone,” on the outer surface of the crystal, recording the age of that event.

“This rock has seen all the events, and the zircon in it has responded to all of these events in a way that, when you go in with a very small-scale ion beam to do the sampling on these different zones, you can pick apart the geological history,” Whitehouse said.

Whitehouse’s team zapped tiny spots on the zircons with a beam of negatively charged oxygen ions to dislodge ions from the crystals, then sucked away these ions into a mass spectrometer to measure the uranium-lead ratio, and thus the dates. The tiny beam and relatively small error have allowed Whitehouse to document the events that these rocks have been through.

“Having our own zircon means we’ve been able to go in and look in more detail at the internal structure in the zircon,” said Whitehouse. “Where we might have a core that’s 3.87, we’ll have a rim that is 2.7 billion years, and that rim, morphologically, looks like an igneous zircon.”

That igneous outer rim of Whitehouse’s zircons shows that it formed in partially molten rock that would have flowed at that time. That flow was probably what brought the host rock next to the carbon-containing sediments. The rim’s date of 2.7 billion years ago means the carbon in the sediments could be any age older than that.

That’s a key difference from Komiya’s work. He argues that the older dates in the cores of the zircons are the true age of the cross-cutting rock. “Even the igneous zircons must have been affected by the tectonothermal event; therefore, the obtained age is the minimum age, and the true age is older,” said Komiya. “The fact that young zircons were found does not negate our research.”

But Whitehouse contends that the old cores of the zircons instead record a time when the original rock formed, long before it became a gneiss and flowed next to the carbon-bearing sediments.

Zombie crystals

Zircon’s resilience means it can survive being eroded from the rock where it formed and then deposited in a new, sedimentary rock as the undead remnants of an older, now-vanished landscape.

The carbon-bearing siltstone contains such zombie zircons, and Whitehouse presented new data on them to the Goldschmidt Conference, dating them to 2.8 billion years ago. Whitehouse argues that these crystals formed in an igneous rock 2.8 billion years ago and were then eroded, washed into the sea, and settled in the silt. So the siltstone must be no older than 2.8 billion years old, he said.

“You cannot deposit a zircon that is not formed yet,” O’Neil explained.

Tiny recorders of history – ancient zircon crystals from Labrador. Left shows layers built up as the zircon went through many heating events. Right shows a zircon with a prism-like outer shape showing that it formed in igneous conditions around an earlier zircon. Circles indicate where an ion beam was used to measure dates. Credit: Martin Whitehouse

This 2.8-billion-year age, along with the igneous zircon age of 2.7 billion years, brackets the age of the organic carbon to anywhere between 2.8 and 2.7 billion years old. That’s much younger than Komiya’s date of 3.95 billion years old.

Komiya disagrees: “I think that the estimated age is minimum age because zircons suffered from many thermal events, so that they were rejuvenated,” he said. In other words, the 2.8-billion-year age again reflects later heating, and the true date is given by the oldest-dated zircons in the siltstone.

But Whitehouse presented a third line of evidence to dispute the 3.95-billion-year date: isotopes of hafnium in the same zombie zircon crystals.

The technique relies on the radioactive decay of lutetium-176 to hafnium-176. If the 2.8-billion-year age were the result of rejuvenation by later heating, the zircons would have to have formed from material with a hafnium isotope ratio incompatible with the isotope composition of the early Earth.
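The check rests on standard decay bookkeeping—the relation below is textbook isotope geochemistry, not something spelled out in the article. Given a zircon’s measured hafnium composition and an assumed crystallization age t, you can back-calculate the initial ratio it must have started with:

$$
\left(\frac{^{176}\mathrm{Hf}}{^{177}\mathrm{Hf}}\right)_{\mathrm{initial}} = \left(\frac{^{176}\mathrm{Hf}}{^{177}\mathrm{Hf}}\right)_{\mathrm{measured}} - \frac{^{176}\mathrm{Lu}}{^{177}\mathrm{Hf}}\left(e^{\lambda_{176} t} - 1\right), \qquad \lambda_{176} \approx 1.87 \times 10^{-11}\ \mathrm{yr}^{-1}
$$

Assume the wrong age, and the computed initial ratio falls outside anything the early Earth’s reservoirs could have supplied.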

“They go to impossible numbers,” said Whitehouse.

The only way that the uranium-lead ratio can be compatible with the hafnium in the zircons, Whitehouse argued, is if the zircons that settled in the silt had crystallized around 2.8 billion years ago, constraining the organic carbon to being no older than that.

The new oldest remains of life on Earth, for now

If the Labrador carbon is no longer the oldest trace of life on Earth, then where are the oldest remains of life now?

For Whitehouse, it’s in the 3.77-billion-year-old Isua Greenstone Belt in Greenland: “I’m willing to believe that’s a well-documented age… that’s what I think is the best evidence for the oldest biogenicity that we have,” said Whitehouse.

O’Neil recently co-authored a paper on Earth’s oldest surviving crustal rocks, located next to Hudson Bay in Canada. He points there. “I would say it’s in the Nuvvuagittuq Greenstone Belt,” said O’Neil, “because I would argue that these rocks are 4.3 billion years old. Again, not everybody agrees!” Intriguingly, the rocks he is referring to contain carbon with a possibly biological origin and are thought to be the remains of the kind of undersea vent where life could well have first emerged.

But the bigger picture is that we have credible traces of life of this vintage—be it 3.8, 3.9, or 4.3 billion years old.

Any of those dates is remarkably early in the planet’s 4.6-billion-year life. It’s long before there was an oxygenated atmosphere, before continents emerged above sea level, and before plate tectonics got going. It’s also much older than the oldest microbial “stromatolite” fossils, which have been dated to about 3.48 billion years ago.

O’Neil thinks that once conditions on Earth were habitable, life would have emerged relatively fast: “To me, it’s not shocking, because the conditions were the same,” he said. “The Earth has the luxury of time… but biology is very quick. So if all the conditions were there by 4.3 billion years old, why would biology wait 500 million years to start?”


Howard Lee is a freelance science writer focusing on the evolution of planet Earth through deep time. He earned a B.Sc. in geology and M.Sc. in remote sensing, both from the University of London, UK.


New adhesive surface modeled on a remora works underwater


It was tested for its ability to adhere to the inside of the digestive tract.

Most adhesives can’t stick to wet surfaces because water and other fluids disrupt the adhesive’s bonding mechanisms. This problem, though, has been beautifully solved by evolution in remora suckerfish, which use an adhesive disk on top of their heads to attach to animals like dolphins, sharks, and even manta rays.

A team of MIT scientists has now taken a close look at these remora disks and reverse-engineered them. “Basically, we looked at nature for inspiration,” says Giovanni Traverso, a professor in MIT’s Department of Mechanical Engineering and senior author of the study.

Sticking variety

Remora adhesive disks are an evolutionary adaptation of the fish’s first dorsal fin, the one that in other species sits on top of the body, just behind the head and gill covers. The disk rests on an intercalary backbone—a bone structure that most likely evolved from parts of the spine. This bony structure supports lamellae, specialized bony plates with tiny backward-facing spikes called spinules. The entire disk is covered with soft tissue compartments that are open at the top. “This makes the remora fish adhere very securely to soft-bodied, fast-moving marine hosts,” Traverso says.

A remora attaches to the host by pressing itself against the skin, which pushes the water out of these compartments, creating a low-pressure zone. Then, the spinules mechanically interlock with the host’s surface, making the whole thing work a bit like a combination of a suction cup and Velcro. When the fish wants to detach from a host, it lifts the disk, letting water back into the compartments to remove the suction. Once released, it can simply swim away.
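The suction half of that mechanism is easy to put rough numbers on: the holding force is the pressure deficit multiplied by the disk area. The article gives no figures, so the values below are purely illustrative assumptions:

$$
F = \Delta P \cdot A \approx 50\ \mathrm{kPa} \times 2 \times 10^{-3}\ \mathrm{m^2} = 100\ \mathrm{N}
$$

That’s roughly 10 kilograms of holding force from a palm-sized disk—before the spinules’ Velcro-like interlocking adds any resistance to sliding.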

What impressed the scientists the most, though, was the versatility of those disks. Reef-associated species of remora like Phtheirichthys lineatus are generalists and stick to various hosts, including other fish, sharks, or turtles. Other species living in the open sea are more specialized and attach to cetaceans, swordfish, or marlins. While most remoras attach to the external tissue of their hosts, R. albescens sticks within the oral cavities and gill chamber of manta rays.

A close-up of the adhesive pad of a remora. Credit: Stephen Frink

To learn what makes all these different disks so good at sticking underwater, the team first examined their anatomy in detail. It turned out that the difference between the disks was mostly in the positioning of lamellae. Generalist species have a mix of parallel and angled lamellae, while remoras sticking to fast-swimming hosts have them mostly parallel. R. albescens, on the other hand, doesn’t have a dominant lamellae orientation pattern but has them positioned at a very wide variety of angles.

The researchers wanted to make an adhesive device that would work for a wide range of applications, including maritime exploration and underwater manufacturing. Their initial goal, though, was designing a drug delivery platform that could reliably stick to the inside walls of the gastrointestinal tract. So they chose R. albescens disks as their starting point, since that species already attaches internally to its host. They termed their device a Mechanical Underwater Soft Adhesion System (MUSAS).

However, they didn’t just opt for a biomimetic, copy-and-paste design. “There were things we did differently,” Traverso says.

Upgrading nature

The first key difference was deployment. MUSAS was supposed to travel down the GI tract to reach its destination, so the first challenge was making it fit into a pill. The team chose the size 000 capsule, which, at 26 millimeters in length and 9.5 millimeters in diameter, is the largest Food and Drug Administration-approved ingestible form. MUSAS had a supporting structure—just like remora disks—but made with stainless steel. The angled lamellae with spinules, fashioned after those on R. albescens, were made of a shape-memory nickel-titanium alloy. The role of the remora’s soft tissues, which provide the suction by dividing the disk into compartments, was played by an elastomer.

MUSAS would be swallowed in a folded form within its huge pill. “The capsule is tuned to dissolve in a specific pH environment, which is how we determine the target location—for example, the small intestine has a slightly different pH than the stomach,” says Ziliang Kang, an MIT researcher in Traverso’s group and lead author of the study. Once released, the shape-memory alloy in MUSAS’ lamellae-like structures would unfold in response to body temperature, and the whole thing would stick to the wall of the target organ, be it the esophagus, the stomach, or the intestines.

The mechanism of sticking was also a bit different from that of remoras. “The fish can swim and actively press itself against the surface it wants to stick to. MUSAS can’t do that, so instead we relied on the peristaltic movements within the GI tract to exert the necessary force,” Traverso explains. When the muscles contract, MUSAS would be pressed against the wall and attach to it. And it was expected to stay there for quite some time.

The team ran a series of experiments to evaluate MUSAS performance in a few different scenarios. The drug-delivery platform application was tested on pig organ samples. MUSAS stayed in the sample GI tract for an average of nine days, with the longest sticking time reaching three and a half weeks. MUSAS managed to stay in place despite food and fluids going through the samples.

Even when the team poked the devices with a pipette to test what they called “resisting dynamic interference,” MUSAS just slid a little but remained firmly attached. Other experiments included using MUSAS to attach temperature sensors to external tissues of live fish and putting sensors that could detect reflux events in the GI tract of live pigs.

Branching out

The team is working on making MUSAS compatible with a wider range of drugs and mRNA vaccines. “We also think about using this for stimulating tissues,” Traverso says. The solution he has in mind would use MUSAS to deliver electrical pulses to the walls of the GI tract, which Traverso’s lab has shown can activate appetite-regulating hormones. But the team also wants to go beyond strictly medical applications.

The team demonstrated that MUSAS is really strong as an adhesive. When it sticks to a surface, it can hold a weight over a thousand times greater than its own. This puts MUSAS more or less on par with some of the best adhesives we have, such as polyurethane glues or epoxy resins. What’s more, this sticking strength was measured when MUSAS was attached to soft, uneven, wet surfaces. “On a rigid, even surface, the force-to-weight ratio should be even higher,” Kang claims. And this, Kang thinks, makes scaled-up variants of MUSAS a good match for underwater manufacturing.

“The first scenario I see is using MUSAS as grippers attached to robotic arms moving around soft objects,” Kang explains. Currently, this is done using vacuum systems that simply suck onto a fabric or other surface. The problem is that these solutions are rather complex and heavy. Scaled-up MUSAS should be able to achieve the same thing passively, cutting cost and weight. The second idea Kang has is using MUSAS in robots designed to perform maintenance jobs beneath the waterline on boats or ships. “We are really trying to see what is possible,” Traverso says.

Nature, 2025.  DOI: 10.1038/s41586-025-09304-4


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


For giant carnivorous dinosaurs, big size didn’t mean a big bite

“And then you have the Spinosaurus, which was kind of weird in general,” Rowe says. “There was a study by Dave Hone and Tom Holtz about how it was waiting on the shorelines, waiting for food to go by that it could fish out.” But Spinosaurus’ foraging wasn’t limited to fishing. There was a pterosaur found preserved in its stomach, and there were Iguanodon remains found in the maw of a Baryonyx, another large carnivore belonging to the same lineage as the Spinosaurus. “They had great diversity in their diet. They were generalists, but our results show they weren’t these massive bone-crunching predators like the T. rex,” Rowe says. Because the T. rex was just built different.

King of the Cretaceous jungle

The Tyrannosauroidea lineage had stiff, akinetic skulls, meaning they had very little mobility in their joints. The T. rex skull could, and most likely did, withstand very high stress as the animal pursued a “high stress, high power” strategy, entirely different from that of other large carnivores. “They were very much like big crocodiles with extremely strong, reinforced jaws and powerful muscles that could pulverize bones,” Rowe claims.

The T. rex, he argued, was a specialist—an ambush predator that attacked large, highly mobile prey, aiming to subdue it with a single bite. “And we have fossil evidence of that,” Rowe says. “In the Museum of Natural History in New York, there is a Hadrosaur, a large herbivorous dinosaur with a duck-like beak, and there’s a T. rex tooth embedded in its back.” This, he thinks, means the T. rex was actively preying on this animal, especially since there are healing marks around the stuck tooth. “Even with this super strong bite, the T. rex wasn’t always successful,” Rowe adds.

Still, the fight with the Spinosaurus most likely wouldn’t go the way it did in Jurassic Park III. “The T. rex was built to fight like that; the Spinosaurus really wasn’t,” Rowe says.

Current Biology, 2025.  DOI: 10.1016/j.cub.2025.06.051


Some AI tools don’t understand biology yet


A collection of new studies on gene activity shows that AI tools aren’t very good at predicting it.

Gene activity appears to remain beyond the abilities of AI at the moment. Credit: BSIP

Biology is an area of science where AI and machine-learning approaches have seen some spectacular successes, such as designing enzymes to digest plastics and proteins to block snake venom. But in an era of seemingly endless AI hype, it might be easy to think that we could just set AI loose on the mounds of data we’ve already generated and end up with a good understanding of most areas of biology, allowing us to skip a lot of messy experiments and the unpleasantness of research on animals.

But biology involves a whole lot more than just protein structures. And it’s extremely premature to suggest that AI can be equally effective at handling all aspects of biology. So we were intrigued to see a study comparing a set of AI software packages designed to predict how active genes will be in cells exposed to different conditions. As it turns out, the AI systems couldn’t manage to do any better than a deliberately simplified method of prediction.

The results serve as a useful caution that biology is incredibly complex, and developing AI systems that work for one aspect of it is not an indication that they can work for biology generally.

AI and gene activity

The study was conducted by a trio of researchers based in Heidelberg: Constantin Ahlmann-Eltze, Wolfgang Huber, and Simon Anders. They note that a handful of additional studies have been released while their work was on a pre-print server, all of them coming to roughly the same conclusions. But these authors’ approach is pretty easy to understand, so we’ll use it as an example.

The AI software they examined attempts to predict changes in gene activity. While every cell carries copies of the roughly 20,000 genes in the human genome, not all of them are active in a given cell—”active” in this case meaning they are producing messenger RNAs. Some provide an essential function and are active at high levels at all times. Others are only active in specific cell types, like nerves or skin. Still others are activated under specific conditions, like low oxygen or high temperatures.

Over the years, we’ve done many studies examining the activity of every gene in a given cell type under different conditions. These studies can range from using gene chips to determine which messenger RNAs are present in a population of cells to sequencing the RNAs isolated from single cells and using that data to identify which genes are active. But collectively, they can provide a broad, if incomplete, picture that links the activity of genes with different biological circumstances. It’s a picture you could potentially use to train an AI that would make predictions about gene activity under conditions that haven’t been tested.

Ahlmann-Eltze, Huber, and Anders tested a set of what are called single-cell foundation models that have been trained on this sort of gene activity data. The “single cell” portion indicates that these models have been trained on gene activity obtained from individual cells rather than a population average of a cell type. Foundation models mean that they have been trained on a broad range of data but will require additional training before they’re deployed for a specific task.

Underwhelming performance

The task in this case is predicting how gene activity might change when genes are altered. When an individual gene is lost or activated, it’s possible that the only messenger RNA that is altered is the one made by that gene. But some genes encode proteins that regulate a collection of other genes, in which case you might see changes in the activity of dozens of genes. In other cases, the loss or activation of a gene could affect a cell’s metabolism, resulting in widespread alterations of gene activity.

Things get even more complicated when two genes are involved. In many cases, the genes will do unrelated things, and you get a simple additive effect: the changes caused by the loss of one, plus the changes caused by the loss of the other. But if there’s some overlap between the functions, you can get an enhancement of some changes, suppression of others, and other unexpected changes.

To start exploring these effects, researchers have intentionally altered the activity of one or more genes using the CRISPR DNA editing technology, then sequenced every RNA in the cell afterward to see what sorts of changes took place. This approach (termed Perturb-seq) is useful because it can give us a sense of what the altered gene does in a cell. But for Ahlmann-Eltze, Huber, and Anders, it provides the data they need to determine if these foundation models can be trained to predict the ensuing changes in the activity of other genes.

Starting with the foundation models, the researchers conducted additional training using data from an experiment where either one or two genes were activated using CRISPR. This training used the data from 100 individual gene activations and another 62 where two genes were activated. Then, the AI packages were asked to predict the results for another 62 pairs of genes that were activated. For comparison, the researchers also made predictions using two extremely simple models: one that always predicted that nothing would change and a second that always predicted an additive effect (meaning that activating genes A and B would produce the changes caused by activating A plus the changes caused by activating B).
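As a concrete picture of what those baselines involve—a minimal sketch under assumptions, not the authors’ code; the profiles and names here are hypothetical—each perturbation is just a vector of per-gene expression changes, and neither baseline ever looks at the measured double-perturbation data:

```python
import numpy as np

G = 20_000                                # genes measured per cell
rng = np.random.default_rng(0)

# Hypothetical single-perturbation profiles: log fold-change per gene.
single = {"A": rng.normal(0, 0.1, G), "B": rng.normal(0, 0.1, G)}

def predict_no_change(a, b):
    """Baseline 1: activating any pair of genes changes nothing."""
    return np.zeros(G)

def predict_additive(a, b):
    """Baseline 2: the pair's effect is the sum of the single effects."""
    return single[a] + single[b]

def rmse(pred, obs):
    """One simple way to score a prediction across all genes."""
    return np.sqrt(np.mean((pred - obs) ** 2))

# A made-up 'observed' double perturbation with a little extra noise.
observed = single["A"] + single["B"] + rng.normal(0, 0.05, G)
print(rmse(predict_no_change("A", "B"), observed))   # larger error
print(rmse(predict_additive("A", "B"), observed))    # smaller error
```

The foundation models had to beat these two trivial predictors to show they had learned something general about gene regulation.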

They didn’t work. “All models had a prediction error substantially higher than the additive baseline,” the researchers concluded. The result held when the researchers used alternative measurements of the accuracy of the AI’s predictions.

The gist of the problem seemed to be that the trained foundation models weren’t very good at predicting when the alterations of pairs of genes would produce complex patterns of changes—when the alteration of one gene synergized with the alteration of a second. “The deep learning models rarely predicted synergistic interactions, and it was even rarer that those predictions were correct,” the researchers concluded. In a separate test that looked specifically at these synergies between genes, it turned out that none of the models were better than the simplified system that always predicted no changes.

Not there yet

The overall conclusions from the work are pretty clear. “As our deliberately simple baselines are incapable of representing realistic biological complexity yet were not outperformed by the foundation models,” the researchers write, “we conclude that the latter’s goal of providing a generalizable representation of cellular states and predicting the outcome of not-yet-performed experiments is still elusive.”

It’s important to emphasize that “still elusive” doesn’t mean we’re incapable of ever developing an AI that can help with this problem. It also doesn’t mean that this applies to all cellular states (the results are specific to gene activity), much less all of biology. At the same time, the work provides a valuable caution at a time when there’s a lot of enthusiasm for the idea that AI’s success in a couple of areas means we’re on the cusp of a world where it can be applied to anything.

Nature Methods, 2025. DOI: 10.1038/s41592-025-02772-6  (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


Figuring out why a nap might help people see things in new ways


An EEG signal of sleep is associated with better performance on a mental task.

The guy in the back may be doing a more useful activity. Credit: XAVIER GALIANA

Dmitri Mendeleev famously saw the complete arrangement of the periodic table after falling asleep at his desk. He claimed that in his dream he saw a table where all the elements fell into place, and he wrote it all down when he woke up. By having a eureka moment right after a nap, he joined a club full of rather talented people: Mary Shelley, Thomas Edison, and Salvador Dalí.

To figure out if there’s a grain of truth to all these anecdotes, a team of German scientists at Hamburg University, led by cognitive science researcher Anika T. Löwe, conducted an experiment designed to trigger such nap-following strokes of genius—and catch them in the act with EEG brain monitoring gear. And they kind of succeeded.

Catching Edison’s cup

“Thomas Edison had this technique where he held a cup or something like that when he was napping in his chair,” says Nicolas Schuck, a professor of cognitive science at Hamburg University and senior author of the study. “When he fell asleep too deeply, the cup falling from his hand would wake him up—he was convinced that was the way to trigger these eureka moments.” While dozing off in a chair with a book or a cup doesn’t seem particularly radical, a number of cognitive scientists got serious about re-creating Edison’s approach to insights and testing it in their experiments.

One recent study along these lines was done at Sorbonne University by Célia Lacaux, a cognitive neuroscientist, and her colleagues. Over 100 participants were presented with a mathematical problem and told it could be solved by applying two simple rules in a stepwise manner. However, there was also an undescribed shortcut that made reaching the solution much quicker. The goal was to see if participants would figure this shortcut out after an Edison-style nap. The scientists would check whether the eureka moment would show up in the EEG.

Lacaux’s team also experimented with different objects the participants should hold while napping: spoons, steel spheres, stress balls, etc. It turned out Edison was right, and a cup was by far the best choice. It also turned out that most participants recognized there was a hidden rule after the falling cup woke them up. The nap was brief, only long enough to enter the light, non-REM N1 phase of sleep.

Initially, Schuck’s team wanted to replicate the results of Lacaux’s study. They even bought the exact same make of cups, but the cups failed this time. “For us, it just didn’t work. People who fell asleep often didn’t drop these cups—I don’t know why,” Schuck says.

The bigger surprise, however, was that the N1 phase sleep didn’t work either.

Tracking the dots

Schuck’s team set up an experiment that involved asking 90 participants to track dots on a screen in a series of trials, with a 20-minute-long nap in between. The dots were rather small, colored either purple or orange, placed in a circle, and they moved in one of two directions. The task for the participants was to determine the direction the dots were moving. That could range from easy to really hard, depending on the amount of jitter the team introduced.

The insight the participants could discover was hidden in the color coding. After a few trials where the dots’ direction was random, the team introduced a change that tied the movement to the color: orange dots always moved in one direction, and the purple dots moved in the other. It was up to the participants to figure this out, either while awake or through a nap-induced insight.

Those dots were the first difference between Schuck’s experiment and the Sorbonne study. Lacaux had her participants cracking a mathematical problem that relied on analytical skills. Schuck’s task was more about perceptiveness and out-of-the-box thinking.

The second difference was that the cups failed to drop and wake participants up. Muscles usually relax more when sleep gets deeper, which is why most people drop whatever they’re holding either at the end of the N1 phase or at the onset of the N2 phase, when the body starts to lose voluntary motor control. “We didn’t really prevent people from reaching the N2 phase, and it turned out the participants who reached the N2 phase had eureka moments most often,” Schuck explains.

Over 80 percent of people who reached the deeper, N2 phase of sleep found the color-coding solution. Participants who fell into a light N1 sleep had a 61 percent success rate; that dropped to just 55 percent in a group that stayed awake during their 20-minute nap time. In a control group that did the same task without a nap break, only 49 percent of participants figured out the hidden trick.

The divergent results in Lacaux’s and Schuck’s experiments were puzzling, so the team looked at the EEG readouts, searching for features in the data that could predict eureka moments better than sleep phases alone. And they found something.

The slope of genius

The EEG signal in the human brain is a mix of low and high frequencies, and the way power is distributed across them can be summarized by a spectral slope. When we are awake, there are a lot of high-frequency signals, and this slope looks rather flat. During sleep, the high frequencies get muted, low-frequency signals dominate, and the slope gets steeper. Usually, the deeper we sleep, the steeper our EEG slope is.
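For intuition, here is a minimal sketch of one common way to estimate such a slope—fit a line to the power spectrum on log-log axes. The exact method and frequency band used in the paper may differ; everything here is illustrative:

```python
import numpy as np
from scipy.signal import welch

def spectral_slope(eeg, fs, fmin=1.0, fmax=45.0):
    """Fit a line to the power spectrum in log-log space; the fitted
    slope tracks the 1/f-like tilt (more negative = 'steeper')."""
    freqs, psd = welch(eeg, fs=fs, nperseg=4 * int(fs))
    band = (freqs >= fmin) & (freqs <= fmax)   # skip DC and line noise
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
    return slope

# Toy usage: white noise standing in for one EEG channel at 250 Hz.
fs = 250
signal = np.random.default_rng(1).normal(size=60 * fs)
print(spectral_slope(signal, fs))   # white noise gives a slope near 0 ('flat')
```

A steeper (more negative) slope means low frequencies dominate, which is what deeper sleep looks like in the EEG.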

The team noticed that eureka moments seemed to be highly correlated with a steep EEG spectral slope—the steeper the slope, the more likely people were to get a breakthrough. In fact, the models based on the EEG signal alone predicted eureka moments better than predictions made based on sleep phases and even based on the sleep phases and EEG readouts combined.

“Traditionally, people divided sleep EEG readouts down into discrete stages like N1 or N2, but as usual in biology, things in reality are not as discrete,” Schuck says. “They’re much more continuous; there’s kind of a gray zone.” He told Ars that looking specifically at the EEG trace may help us better understand what exactly happens in the brain when a sudden moment of insight arrives.

But Schuck wants to get even more data in the future. “We’re currently running a study that’s been years in the making: We want to use both EEG and [functional magnetic resonance imaging] at the same time to see what happens in the brain when people are sleeping,” Schuck says. The addition of fMRI imaging will enable Schuck and his colleagues to see which areas of the brain get activated during sleep. What the team wants to learn from combining EEG and fMRI imagery is how sleep boosts memory consolidation.

“We also hope to get some insights, no pun intended, into the processes that play a role in generating insights,” Schuck adds.

PLOS Biology, 2025.  DOI: 10.1371/journal.pbio.3003185


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


A neural brain implant provides near instantaneous speech


Focusing on sound production instead of word choice makes for a flexible system.

The participant’s implant gets hooked up for testing. Credit: UC Regents

Stephen Hawking, a British physicist and arguably the most famous person to have suffered from amyotrophic lateral sclerosis (ALS), communicated with the world using a sensor installed in his glasses. That sensor used tiny movements of a single muscle in his cheek to select characters on a screen. Once he typed a full sentence at a rate of roughly one word per minute, the text was synthesized into speech by a DECtalk TC01 synthesizer, which gave him his iconic, robotic voice.

But a lot has changed since Hawking died in 2018. Recent brain-computer-interface (BCI) devices have made it possible to translate neural activity directly into text and even speech. Unfortunately, these systems had significant latency, often limited the user to a predefined vocabulary, and did not handle nuances of spoken language like pitch or prosody. Now, a team of scientists at the University of California, Davis has built a neural prosthesis that can instantly translate brain signals into sounds—phonemes and words. It may be the first real step we have taken toward a fully digital vocal tract.

Text messaging

“Our main goal is creating a flexible speech neuroprosthesis that enables a patient with paralysis to speak as fluently as possible, managing their own cadence, and be more expressive by letting them modulate their intonation,” says Maitreyee Wairagkar, a neuroprosthetics researcher at UC Davis who led the study. Developing a prosthesis ticking all these boxes was an enormous challenge because it meant Wairagkar’s team had to solve nearly all the problems BCI-based communication solutions have faced in the past. And they had quite a lot of problems.

The first issue was moving beyond text. Most successful neural prostheses developed so far have translated brain signals into text: the words a patient with an implanted prosthesis wanted to say simply appeared on a screen. Francis R. Willett led a team at Stanford University that achieved brain-to-text translation with around a 25 percent error rate. “When a woman with ALS was trying to speak, they could decode the words. Three out of four words were correct. That was super exciting but not enough for daily communication,” says Sergey Stavisky, a neuroscientist at UC Davis and a senior author of the study.

Delays and dictionaries

One year after the Stanford work, in 2024, Stavisky’s team published its own research on a brain-to-text system that bumped the accuracy to 97.5 percent. “Almost every word was correct, but communicating over text can be limiting, right?” Stavisky said. “Sometimes you want to use your voice. It allows you to make interjections, it makes it less likely other people interrupt you—you can sing, you can use words that aren’t in the dictionary.” But the most common approach to generating speech relied on synthesizing it from text, which led straight into another problem with BCI systems: very high latency.

In nearly all BCI speech aids, sentences appeared on a screen after a significant delay, long after the patient finished stringing the words together in their mind. The speech synthesis part usually happened after the text was ready, which caused even more delay. Brain-to-text solutions also suffered from a limited vocabulary. The latest system of this kind supported a dictionary of roughly 1,300 words. When you tried to speak a different language, use more elaborate vocabulary, or even say the unusual name of a café just around the corner, the systems failed.

So, Wairagkar designed her prosthesis to translate brain signals into sounds, not words—and do it in real time.

Extracting sound

The patient who agreed to participate in Wairagkar’s study was codenamed T15 and was a 46-year-old man suffering from ALS. “He is severely paralyzed and when he tries to speak, he is very difficult to understand. I’ve known him for several years, and when he speaks, I understand maybe 5 percent of what he’s saying,” says David M. Brandman, a neurosurgeon and co-author of the study. Before working with the UC Davis team, T15 communicated using a gyroscopic head mouse to control a cursor on a computer screen.

To use an early version of Stavisky’s brain-to-text system, the patient had 256 microelectrodes implanted into his ventral precentral gyrus, an area of the brain responsible for controlling vocal tract muscles.

For the new brain-to-speech system, Wairagkar and her colleagues relied on the same 256 electrodes. “We recorded neural activities from single neurons, which is the highest resolution of information we can get from our brain,” Wairagkar says. The signal registered by the electrodes was then sent to an AI algorithm called a neural decoder that deciphered those signals and extracted speech features such as pitch or voicing. In the next step, these features were fed into a vocoder, a speech synthesizing algorithm designed to sound like the voice that T15 had when he was still able to speak normally. The entire system worked with latency down to around 10 milliseconds—the conversion of brain signals into sounds was effectively instantaneous.
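Conceptually, the streaming pipeline looks like the toy sketch below. Every function is a placeholder standing in for the real electrode interface, decoder, and vocoder—the article doesn’t describe the software at this level, so treat this purely as an illustration of a causal, frame-by-frame loop:

```python
import numpy as np

FRAME_MS = 10                    # one decoding step per 10 ms of neural data
FS_AUDIO = 16_000                # audio sample rate (Hz)
SAMPLES_PER_FRAME = FS_AUDIO * FRAME_MS // 1000

rng = np.random.default_rng(0)

def read_frame():
    """Placeholder: 10 ms of activity from the 256 electrodes."""
    return rng.normal(size=256)

def decode_features(frame):
    """Placeholder decoder: the real one maps neural activity to
    speech features such as pitch and voicing."""
    return {"pitch_hz": 120.0 + 5.0 * float(frame[:8].mean()), "voiced": True}

def vocode(features):
    """Placeholder vocoder: renders 10 ms of audio from the features."""
    t = np.arange(SAMPLES_PER_FRAME) / FS_AUDIO
    tone = np.sin(2 * np.pi * features["pitch_hz"] * t)
    return tone if features["voiced"] else np.zeros_like(tone)

# Causal loop: each frame is decoded and synthesized with no lookahead,
# which is what keeps end-to-end latency near a single 10 ms frame.
for _ in range(5):
    audio_chunk = vocode(decode_features(read_frame()))
```

The key design point is that nothing in the loop waits for a finished word or sentence; sound comes out as fast as the neural data comes in.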

Because Wairagkar’s neural prosthesis converted brain signals into sounds, it didn’t come with a limited selection of supported words. The patient could say anything he wanted, including pseudo-words that weren’t in a dictionary and interjections like “um,” “hmm,” or “uh.” Because the system was sensitive to features like pitch and prosody, he could also vocalize questions by saying the last word in a sentence with a slightly higher pitch, and he could even sing a short melody.

But Wairagkar’s prosthesis had its limits.

Intelligibility improvements

To test the prosthesis’s performance, Wairagkar’s team first asked human listeners to match a recording of some synthesized speech by the T15 patient with one transcript from a set of six candidate sentences of similar length. Here, the results were perfect, with the system achieving 100 percent intelligibility.

The issues began when the team tried something a bit harder: an open transcription test, where listeners had to work without any candidate transcripts. In this second test, the word error rate was 43.75 percent, meaning participants identified a bit more than half of the recorded words correctly. This was certainly an improvement over the intelligibility of T15’s unaided speech, where the word error rate in the same test with the same group of listeners was 96.43 percent. But the prosthesis, while promising, was not yet reliable enough to use for day-to-day communication.
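(Word error rate is the standard metric in speech recognition: against a reference transcript of N words, it counts the substitutions S, deletions D, and insertions I needed to match the listener’s transcription.)

$$
\mathrm{WER} = \frac{S + D + I}{N} \times 100\%
$$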

“We’re not at the point where it could be used in open-ended conversations. I think of this as a proof of concept,” Stavisky says. He suggested that one way to improve future designs would be to use more electrodes. “There are a lot of startups right now building BCIs that are going to have over a thousand electrodes. If you think about what we’ve achieved with just 250 electrodes versus what could be done with a thousand or two thousand—I think it would just work,” he argued. And the work to make that happen is already underway.

Paradromics, a BCI-focused startup based in Austin, Texas, wants to go ahead with clinical trials of a speech neural prosthesis and is already seeking FDA approval. “They have a 1,600 electrode system, and they publicly stated they are going to do speech,” Stavisky says. “David Brandman, our co-author, is going to be the lead principal investigator for these trials, and we’re going to do it here at UC Davis.”

Nature, 2025.  DOI: 10.1038/s41586-025-09127-3


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.
