Biology


Figuring out why a nap might help people see things in new ways


A feature of the EEG signal during sleep is associated with better performance on a mental task.

The guy in the back may be doing a more useful activity. Credit: XAVIER GALIANA

Dmitri Mendeleev famously saw the complete arrangement of the periodic table after falling asleep at his desk. He claimed that in his dream he saw a table where all the elements fell into place, and he wrote it all down when he woke up. By having a eureka moment right after a nap, he joined a club full of rather talented people: Mary Shelley, Thomas Edison, and Salvador Dalí.

To figure out if there’s a grain of truth to all these anecdotes, a team of German scientists at the University of Hamburg, led by cognitive science researcher Anika T. Löwe, conducted an experiment designed to trigger such nap-following strokes of genius—and catch them in the act with EEG brain monitoring gear. And they kind of succeeded.

Catching Edison’s cup

“Thomas Edison had this technique where he held a cup or something like that when he was napping in his chair,” says Nicolas Schuck, a professor of cognitive science at the University of Hamburg and senior author of the study. “When he fell asleep too deeply, the cup falling from his hand would wake him up—he was convinced that was the way to trigger these eureka moments.” While dozing off in a chair with a book or a cup doesn’t seem particularly radical, a number of cognitive scientists got serious about re-creating Edison’s approach to insights and testing it in their experiments.

One such recent study was done at Sorbonne University by Célia Lacaux, a cognitive neuroscientist, and her colleagues. Over 100 participants were presented with a mathematical problem and told it could be solved by applying two simple rules in a stepwise manner. However, there was also a hidden shortcut that made reaching the solution much quicker. The goal was to see if participants would figure this shortcut out after an Edison-style nap. The scientists would check whether the eureka moment showed up in the EEG.

Lacaux’s team also experimented with different objects the participants would hold while napping: spoons, steel spheres, stress balls, etc. It turned out Edison was right, and a cup was by far the best choice. It also turned out that most participants recognized there was a hidden rule after the falling cup woke them up. The nap was brief, only long enough to enter the light, non-REM N1 phase of sleep.

Initially, Schuck’s team wanted to replicate the results of Lacaux’s study. They even bought the exact same make of cups, but the cups failed this time. “For us, it just didn’t work. People who fell asleep often didn’t drop these cups—I don’t know why,” Schuck says.

The bigger surprise, however, was that N1 phase sleep didn’t work either.

Tracking the dots

Schuck’s team set up an experiment that involved asking 90 participants to track dots on a screen in a series of trials, with a 20-minute-long nap in between. The dots were rather small, colored either purple or orange, placed in a circle, and they moved in one of two directions. The task for the participants was to determine the direction the dots were moving. That could range from easy to really hard, depending on the amount of jitter the team introduced.

The insight the participants could discover was hidden in the color coding. After a few trials where the dots’ direction was random, the team introduced a change that tied the movement to the color: orange dots always moved in one direction, and the purple dots moved in the other. It was up to the participants to figure this out, either while awake or through a nap-induced insight.

Those dots were the first difference between Schuck’s experiment and the Sorbonne study. Lacaux had her participants cracking a mathematical problem that relied on analytical skills. Schuck’s task was more about perceptiveness and out-of-the-box thinking.

The second difference was that the cups failed to drop and wake participants up. Muscles usually relax more when sleep gets deeper, which is why most people drop whatever they’re holding either at the end of the N1 phase or at the onset of the N2 phase, when the body starts to lose voluntary motor control. “We didn’t really prevent people from reaching the N2 phase, and it turned out the participants who reached the N2 phase had eureka moments most often,” Schuck explains.

Over 80 percent of people who reached the deeper, N2 phase of sleep found the color-coding solution. Participants who fell into a light N1 sleep had a 61 percent success rate; that dropped to just 55 percent in a group that stayed awake during their 20-minute nap time. In a control group that did the same task without a nap break, only 49 percent of participants figured out the hidden trick.

The divergent results in Lacaux’s and Schuck’s experiments were puzzling, so the team looked at the EEG readouts, searching for features in the data that could predict eureka moments better than sleep phases alone. And they found something.

The slope of genius

The EEG signal recorded from the human brain contains a mix of low and high frequencies, and the balance between them can be summarized as a spectral slope. When we are awake, there are a lot of high-frequency signals, and this slope looks rather flat. During sleep, these high frequencies get muted, low-frequency signals dominate, and the slope gets steeper. Usually, the deeper we sleep, the steeper our EEG slope is.
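
To make the idea concrete, here is a minimal sketch (not the study’s analysis code) of how a spectral slope can be estimated: compute a power spectrum for an EEG segment and fit a line to it on log-log axes. The sampling rate, frequency band, and synthetic test signal below are all arbitrary assumptions.

```python
import numpy as np
from scipy.signal import welch

def spectral_slope(eeg, fs=250.0, fmin=1.0, fmax=45.0):
    """Estimate the aperiodic (1/f) spectral slope of an EEG segment.

    Fits a straight line to log10(power) vs. log10(frequency); a more
    negative value corresponds to a 'steeper' spectrum (deeper sleep).
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
    band = (freqs >= fmin) & (freqs <= fmax)
    slope, _intercept = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
    return slope

# Toy demo: pink-ish (1/f) noise should give a slope near -1.
rng = np.random.default_rng(0)
white = rng.standard_normal(250 * 60)           # one minute of fake data at 250 Hz
spectrum = np.fft.rfft(white)
f = np.fft.rfftfreq(white.size, d=1 / 250.0)
spectrum[1:] /= np.sqrt(f[1:])                  # shape the power to fall off as 1/f
pink = np.fft.irfft(spectrum, n=white.size)
print(spectral_slope(pink))
```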

The team noticed that eureka moments seemed to be highly correlated with a steep EEG spectral slope—the steeper the slope, the more likely people were to get a breakthrough. In fact, models based on the EEG signal alone predicted eureka moments better than models based on sleep phases, and even better than models based on sleep phases and EEG readouts combined.

“Traditionally, people divided sleep EEG readouts into discrete stages like N1 or N2, but as usual in biology, things in reality are not as discrete,” Schuck says. “They’re much more continuous; there’s kind of a gray zone.” He told Ars that looking specifically at the EEG trace may help us better understand what exactly happens in the brain when a sudden moment of insight arrives.

But Schuck wants to get even more data in the future. “We’re currently running a study that’s been years in the making: We want to use both EEG and [functional magnetic resonance imaging] at the same time to see what happens in the brain when people are sleeping,” Schuck says. The addition of the fMRI imaging will enable Schuck and his colleagues to see which areas of the brain get activated during sleep. What the team wants to learn from combining EEG and fMRI imagery is how sleep boosts memory consolidation.

“We also hope to get some insights, no pun intended, into the processes that play a role in generating insights,” Schuck adds.

PLOS Biology, 2025.  DOI: 10.1371/journal.pbio.3003185


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


A neural brain implant provides near instantaneous speech


Focusing on sound production instead of word choice makes for a flexible system.

The participant’s implant gets hooked up for testing. Credit: UC Regents

Stephen Hawking, a British physicist and arguably the most famous man suffering from amyotrophic lateral sclerosis (ALS), communicated with the world using a sensor installed in his glasses. That sensor used tiny movements of a single muscle in his cheek to select characters on a screen. Once he typed a full sentence at a rate of roughly one word per minute, the text was synthesized into speech by a DECtalk TC01 synthesizer, which gave him his iconic, robotic voice.

But a lot has changed since Hawking died in 2018. Recent brain-computer-interface (BCI) devices have made it possible to translate neural activity directly into text and even speech. Unfortunately, these systems suffered from significant latency, often limited the user to a predefined vocabulary, and did not handle nuances of spoken language like pitch or prosody. Now, a team of scientists at the University of California, Davis has built a neural prosthesis that can instantly translate brain signals into sounds—phonemes and words. It may be the first real step we have taken toward a fully digital vocal tract.

Text messaging

“Our main goal is creating a flexible speech neuroprosthesis that enables a patient with paralysis to speak as fluently as possible, managing their own cadence, and be more expressive by letting them modulate their intonation,” says Maitreyee Wairagkar, a neuroprosthetics researcher at UC Davis who led the study. Developing a prosthesis ticking all these boxes was an enormous challenge because it meant Wairagkar’s team had to solve nearly all the problems BCI-based communication solutions have faced in the past. And they had quite a lot of problems.

The first issue was moving beyond text. Most successful neural prostheses developed so far have translated brain signals into text: the words a patient with an implanted prosthesis wanted to say simply appeared on a screen. Francis R. Willett led a team at Stanford University that achieved brain-to-text translation with around a 25 percent error rate. “When a woman with ALS was trying to speak, they could decode the words. Three out of four words were correct. That was super exciting but not enough for daily communication,” says Sergey Stavisky, a neuroscientist at UC Davis and a senior author of the study.

Delays and dictionaries

One year after the Stanford work, in 2024, Stavisky’s team published its own research on a brain-to-text system that bumped the accuracy to 97.5 percent. “Almost every word was correct, but communicating over text can be limiting, right?” Stavisky said. “Sometimes you want to use your voice. It allows you to make interjections, it makes it less likely other people interrupt you—you can sing, you can use words that aren’t in the dictionary.” But the most common approach to generating speech relied on synthesizing it from text, which led straight into another problem with BCI systems: very high latency.

In nearly all BCI speech aids, sentences appeared on a screen after a significant delay, long after the patient finished stringing the words together in their mind. The speech synthesis part usually happened after the text was ready, which caused even more delay. Brain-to-text solutions also suffered from a limited vocabulary. The latest system of this kind supported a dictionary of roughly 1,300 words. When you tried to speak a different language, use more elaborate vocabulary, or even say the unusual name of a café just around the corner, the systems failed.

So, Wairagkar designed her prosthesis to translate brain signals into sounds, not words—and do it in real time.

Extracting sound

The patient who agreed to participate in Wairagkar’s study was codenamed T15 and was a 46-year-old man suffering from ALS. “He is severely paralyzed and when he tries to speak, he is very difficult to understand. I’ve known him for several years, and when he speaks, I understand maybe 5 percent of what he’s saying,” says David M. Brandman, a neurosurgeon and co-author of the study. Before working with the UC Davis team, T15 communicated using a gyroscopic head mouse to control a cursor on a computer screen.

To use an early version of Stavisky’s brain-to-text system, the patient had 256 microelectrodes implanted into his ventral precentral gyrus, an area of the brain responsible for controlling vocal tract muscles.

For the new brain-to-speech system, Wairagkar and her colleagues relied on the same 256 electrodes. “We recorded neural activities from single neurons, which is the highest resolution of information we can get from our brain,” Wairagkar says. The signal registered by the electrodes was then sent to an AI algorithm called a neural decoder that deciphered those signals and extracted speech features such as pitch or voicing. In the next step, these features were fed into a vocoder, a speech synthesizing algorithm designed to sound like the voice that T15 had when he was still able to speak normally. The entire system worked with latency down to around 10 milliseconds—the conversion of brain signals into sounds was effectively instantaneous.
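
The study’s decoder and vocoder are trained neural networks; the sketch below is only a structural illustration of the streaming idea, with made-up dimensions, a stand-in linear “decoder,” and a trivial sinusoidal “vocoder,” to show how each 10-millisecond window of neural features could be turned into sound immediately rather than after a finished sentence.

```python
import numpy as np

FRAME_MS = 10          # one decoding step per 10 ms window (assumed)
SR = 16_000            # audio sample rate (assumed)
N_CHANNELS = 256       # electrode count mentioned in the article
rng = np.random.default_rng(0)

# Stand-in "decoder": a random linear map from binned spike counts to
# (pitch in Hz, voicing in [0, 1]). The real system learns this mapping
# from the participant's own data.
W = rng.normal(scale=0.01, size=(2, N_CHANNELS))

def decode(spike_counts):
    pitch_raw, voice_raw = W @ spike_counts
    pitch = 120.0 + 40.0 * np.tanh(pitch_raw)    # keep pitch in a plausible range
    voicing = 1.0 / (1.0 + np.exp(-voice_raw))   # squash to a 0-1 value
    return pitch, voicing

def vocode(pitch, voicing, phase):
    """Render one 10 ms frame: a sine at the decoded pitch, scaled by voicing."""
    n = SR * FRAME_MS // 1000
    t = np.arange(n) / SR
    frame = voicing * np.sin(phase + 2 * np.pi * pitch * t)
    return frame, phase + 2 * np.pi * pitch * n / SR

# Streaming loop: each incoming window of neural data becomes sound right away,
# instead of waiting for a whole sentence to be decoded as text first.
audio, phase = [], 0.0
for _ in range(300):                              # roughly 3 seconds of frames
    spikes = rng.poisson(2.0, size=N_CHANNELS)    # fake neural activity
    pitch, voicing = decode(spikes)
    frame, phase = vocode(pitch, voicing, phase)
    audio.append(frame)
audio = np.concatenate(audio)
```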

Because Wairagkar’s neural prosthesis converted brain signals into sounds, it didn’t come with a limited selection of supported words. The patient could say anything he wanted, including pseudo-words that weren’t in a dictionary and interjections like “um,” “hmm,” or “uh.” Because the system was sensitive to features like pitch or prosody, he could also vocalize questions saying the last word in a sentence with a slightly higher pitch and even sing a short melody.

But Wairagkar’s prosthesis had its limits.

Intelligibility improvements

To test the prosthesis’s performance, Wairagkar’s team first asked human listeners to match a recording of some synthesized speech by the T15 patient with one transcript from a set of six candidate sentences of similar length. Here, the results were perfect, with the system achieving 100 percent intelligibility.

The issues began when the team tried something a bit harder: an open transcription test where listeners had to work without any candidate transcripts. In this second test, the word error rate was 43.75 percent, meaning participants identified a bit more than half of the recorded words correctly. This was certainly an improvement compared to the intelligibility of T15’s unaided speech, where the word error rate in the same test with the same group of listeners was 96.43 percent. But the prosthesis, while promising, was not yet reliable enough for day-to-day communication.
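
Word error rate, the metric quoted here, counts the substitutions, insertions, and deletions needed to turn the listener’s transcription into the reference sentence, divided by the number of reference words. A minimal implementation (not the study’s scoring code) looks like this:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein edit distance at the word level, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dist[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dist[i][j] = min(sub, dist[i - 1][j] + 1, dist[i][j - 1] + 1)
    return dist[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the quick brown fox", "the quick fox jumps"))  # 0.5
```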

“We’re not at the point where it could be used in open-ended conversations. I think of this as a proof of concept,” Stavisky says. He suggested that one way to improve future designs would be to use more electrodes. “There are a lot of startups right now building BCIs that are going to have over a thousand electrodes. If you think about what we’ve achieved with just 250 electrodes versus what could be done with a thousand or two thousand—I think it would just work,” he argued. And the work to make that happen is already underway.

Paradromics, a BCI-focused startup based in Austin, Texas, wants to go ahead with clinical trials of a speech neural prosthesis and is already seeking FDA approval. “They have a 1,600 electrode system, and they publicly stated they are going to do speech,” Stavisky says. “David Brandman, our co-author, is going to be the lead principal investigator for these trials, and we’re going to do it here at UC Davis.”

Nature, 2025.  DOI: 10.1038/s41586-025-09127-3


Robotic sucker can adapt to surroundings like an actual octopus

This isn’t the first time suction cups have been inspired by highly adaptive octopus suckers. Some models have used pressurized chambers meant to push against a surface and conform to it. Others have focused more on matching the morphology of a biological sucker. This has included giving the suckers microdenticles, the tiny tooth-like projections on octopus suckers that give them a stronger grip.

Previous methods of artificial conformation have had some success, but they could be prone to leakage from gaps between the sucker and the surface it is trying to stick to, and they often needed vacuum pumps to operate. Yue and his team created a sucker that was morphologically and mechanically similar to that of an octopus.

Suckers are muscular structures with extreme flexibility that helps them conform to objects without leakage, contract when gripping objects, and release tension when letting them go. This inspired the researchers to create suckers from a silicone sponge material on the inside and a soft silicone pad on the outside.

For the ultimate biomimicry, Yue thought that the answer to the problems experienced with previous models was to come up with a sucker that simulated the mucus secretion of octopus suckers.

This really sucks

Cephalopod suction was previously thought to be a product of these creatures’ soft, flexible bodies, which can deform easily to adapt to whatever surface they need to grip. Mucus secretion was mostly overlooked until Yue decided to incorporate it into his robo-suckers.

Mollusk mucus is known to be five times more viscous than water. For Yue’s suckers, an artificial fluidic system, designed to mimic the secretions released by glands on a biological sucker, creates a liquid seal between the sucker and the surface it is adhering to, just about eliminating gaps. Water might not have the strength of octopus slime, but it is the next best option for a robot that is going to be immersed in water when it goes exploring, possibly in underwater caves or at the bottom of the ocean.


Changing one gene can restore some tissue regeneration to mice

Regeneration is a trick many animals, including lizards, starfish, and octopuses, have mastered. Axolotls, a salamander species originating in Mexico, can regrow pretty much everything, from severed limbs to eyes, parts of the brain, and the spinal cord. Mammals, though, have mostly lost this ability somewhere along their evolutionary path. Regeneration persisted, in a limited number of tissues, in just a few mammalian species like rabbits and goats.

“We were trying to learn how certain animals lost their regeneration capacity during evolution and then put back the responsible gene or pathway to reactivate the regeneration program,” says Wei Wang, a researcher at the National Institute of Biological Sciences in Beijing. Wang’s team has found one of those inactive regeneration genes, activated it, and brought back a limited regeneration ability to mice that did not have it before.

Of mice and bunnies

The idea Wang and his colleagues had was to run a comparative study of how the wound-healing process works in regenerating and non-regenerating mammalian species. They chose rabbits as their regenerating mammals and mice as the non-regenerating species. As the reference organ, the team picked the ear pinna. “We wanted a relatively simple structure that was easy to observe and yet composed of many different cell types,” Wang says. The test involved punching holes in the ear pinna of rabbits and mice and tracking the wound-repairing process.

The healing process began in the same way in rabbits and mice. Within the first few days after the injury, a blastema—a mass of heterogeneous cells—formed at the wound site. “Both rabbits and mice will heal the wounds after a few days,” Wang explains. “But between the 10th and 15th day, you will see the major difference.” In this timeframe, the earhole in rabbits started to become smaller. There were outgrowths above the blastema—the animals were producing more tissue. In mice, on the other hand, the healing process halted completely, leaving a hole in the ear.


Researchers get viable mice by editing DNA from two sperm


Altering chemical modifications of DNA lets the DNA from two sperm make a mouse.

For many species, producing an embryo is a bit of a contest between males and females. Males want as many offspring as possible and want the females to devote as many resources as possible to each of them. Females do better by keeping their options open and distributing resources in a way that maximizes the number of offspring they can produce over the course of their lives.

In mammals, this plays out through the chemical modification of DNA, a process called imprinting. Males imprint their DNA by adding methyl modifications to it in a way that alters the activity of genes in order to promote the growth of embryos. Females do similar things chemically but focus on shutting down genes that promote embryonic growth. In a handful of key regions of the genome, having only the modifications specific to one sex is lethal, as the embryo can’t grow to match its stage of development.

One consequence of this is that you normally can’t produce embryos using only the DNA from eggs or from sperm. But over the last few years, researchers have gradually worked around the need for imprinted sites to have one copy from each parent. Now, in a very sophisticated demonstration, researchers have used targeted editing of methylation to produce mice from the DNA of two sperm.

Imprinting and same-sex parents

There’s a long history of studying imprinting in mice. Long before the genome was sequenced, people had identified specific parts of the chromosomes that, if deleted, were lethal—but only if inherited from one of the two sexes. They correctly inferred that this meant that the genes in the region are normally inactivated in the germ cells of one of the sexes. If they’re deleted in the other sex, then the combination that results in the offspring—missing on one chromosome, inactivated in the other—is lethal.

Over time, seven critical imprinted regions were identified, scattered throughout the genome. And, roughly 20 years ago, a team managed to find the right deletion to enable a female mouse to give birth to offspring that received a set of chromosomes from each of two unfertilized eggs. The researchers drew parallels to animals that can reproduce through parthenogenesis, where the female gives birth using unfertilized eggs. But the mouse example obviously took a big assist via the manipulation of egg cells in culture before being implanted in a mouse.

By 2016, researchers were specifically editing in deletions of imprinted genes in order to allow the creation of embryos by fusing stem cell lines that only had a single set of chromosomes. This was far more focused than the original experiment, as the deletions were smaller and affected only a few genes. By 2018, they had expanded the repertoire by figuring out how to get the genomes of two sperm together in an unfertilized egg with its own genome eliminated.

The products of two male parents, however, died the day after birth. This was due either to improper compensation for imprinting or simply to the deletions having additional impacts on the embryos’ health. It took until earlier this year, when a very specific combination of 20 different gene edits and deletions enabled mice generated using the chromosomes from two sperm cells to survive to adulthood.

The problem with all of these efforts is that the deletions may have health impacts on the animals and may still cause problems if inherited from the opposite sex. So, while it’s an interesting way to confirm our understanding of the role of imprinting in reproduction, it’s not necessarily the route to using this as a reliable reproductive tool. Which finally brings us to the present research.

Roll your own imprinting

Left out of the above is the nature of the imprinting itself: How does a chunk of chromosome and all the genes on it get marked as coming from a male or female? The secret is to chemically modify that region of the DNA in a way that doesn’t alter base pairing, but does allow it to be recognized as distinct by proteins. The most common way of doing this is to link a single carbon atom (a methyl group) to the base cytosine. This tends to shut nearby genes down, and it can be inherited through cell division, since there are enzymes that recognize when one of the two DNA strands is unmodified and add a methyl to it.

Methylation turns out to explain imprinting. The key regions for imprinting are methylated differently in males and females, which influences nearby gene activity and can be maintained throughout all of embryonic development.

So, to make up for the imprinting problems caused when both sets of chromosomes come from the same sex, what you need to do is a targeted reprogramming of methylation. And that’s what the researchers behind the new paper have done.

First, they needed to tell the two sets of chromosomes apart. To do that, they used two distantly related strains of mice, one standard lab strain that originated in Europe and a second that was caught in the wild in Thailand less than a century ago. These two strains have been separated for long enough that they have a lot of small differences in DNA sequences scattered throughout the genome. So, it was possible to use these to target one or the other of the genomes.

This was done using parts of the DNA editing systems that have been developed, the most famous of which is CRISPR/Cas. These systems have a protein that pairs with an RNA sequence to find a matching sequence in DNA. In this case, those RNAs could be made so that they target imprinting regions in just one of the two mouse strains. The protein/RNA combinations could also be linked to enzymes that modify DNA, either adding methyls or removing them.
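
As a toy illustration of the targeting logic (not the paper’s actual guide design, which has to respect many additional constraints), one could scan for candidate guide sequences that overlap a position where the two strains’ genomes differ, so the RNA matches one strain’s copy and mismatches the other’s:

```python
def strain_specific_guides(lab_seq, wild_seq, guide_len=20):
    """Return candidate guides (taken from the lab-strain sequence) that span
    at least one position where the two strains differ, so a matching RNA
    binds the lab-strain copy but mismatches the wild-strain copy."""
    assert len(lab_seq) == len(wild_seq)
    diffs = {i for i, (a, b) in enumerate(zip(lab_seq, wild_seq)) if a != b}
    guides = []
    for start in range(len(lab_seq) - guide_len + 1):
        if any(start <= i < start + guide_len for i in diffs):
            guides.append((start, lab_seq[start:start + guide_len]))
    return guides

# Made-up sequences with a single strain-specific difference.
lab  = "ACGTTGCAGGTACCATGGCATTAGCCA"
wild = "ACGTTGCAGGTACCATGACATTAGCCA"
print(strain_specific_guides(lab, wild, guide_len=10)[:3])
```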

To bring all this together, the researchers started with an egg and deleted the genome from it. They then injected the heads of sperm, one from the lab strain, one from the recently wild mouse. This left them with an egg with two sets of chromosomes, although a quarter of them would have two Y chromosomes and thus be inviable (unlike the Y, the X has essential genes). Arbitrarily, they chose one set of chromosomes to be female and targeted methylation and de-methylation enzymes to it in order to reprogram the pattern of methylation on it. Once that was done, they could allow the egg to start dividing and implant it into female mice.

Rare success

The researchers spent time ensuring that the enzymes they had were modifying the methylation as expected and that development started as usual. Their general finding is that the enzymes did change the methylation state for about 500 bases on either side of the targeted site and did so pretty consistently. But there are seven different imprinting sites that need to be modified, each of which controls multiple nearby genes. So, while the modifications were consistent, they weren’t always thorough enough to result in the expected changes to all of the nearby genes.

This limited efficiency showed up in the rate of survival. Starting with over 250 reprogrammed embryos that carried DNA from two males, the researchers ended up with 16 pregnancies, but only seven pups made it to birth: four died at birth, and three were born alive. Based on other experiments, most of the rest died during the second half of embryonic development. Of the three live ones, one was nearly 40 percent larger than the typical pup, suggesting problems regulating growth—it died the day after birth.

All three live births were male, although the numbers are small enough that it’s impossible to tell if that’s significant or not.

The researchers suggest several potential reasons for the low efficiency. One is simply that, while the probability of properly reprogramming at least one of the sites is high, reprogramming all seven is considerably more challenging. There’s also the risk of off-target effects, where the modification takes place in locations with similar sequences to the ones targeted. They also concede that there could be other key imprinted regions that we simply haven’t identified yet.

We would need to sort that out if we want to use this approach as a tool, which might be useful as a way to breed mice that carry mutations that affect female viability or fertility. But this work has already been useful even in its inefficient state, because it serves as a pretty definitive validation of our ideas about the function of imprinting in embryonic development, as well as the critical role methylation plays in this process. If we weren’t largely right about both of those, the efficiency of this approach wouldn’t be low—it would be zero.

PNAS, 2025. DOI: 10.1073/pnas.2425307122


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


New body size database for marine animals is a “library of life”

The ocean runs on size

McClain officially launched MOBS as a passion project while on sabbatical in 2022, but he had been informally collecting data on body size for various marine groups for several years before that. So he already had a small set of data to kick off the project, and he incorporated it all into a single large database with a consistent format and style.

Craig McClain holding a giant isopod (Bathynomus giganteus), one of the deep sea’s most iconic crustaceans Credit: Craig McClain

“One of the things that had prevented me from doing this before was the taxonomy issue,” said McClain. “Say you wanted to get the body size for all [species] of octopuses. That was not something that was very well known unless some taxonomist happened to publish [that data]. And that data was likely not up-to-date because new species are [constantly] being described.”

However, in the last five to ten years, the World Register of Marine Species (WoRMS) was established with the objective of cataloging all marine life, with taxonomy experts assigned to specific groups to determine valid new species, which are then added to the data set with a specific numerical code. McClain tied his own dataset to that same code, making it quite easy to update MOBS as new species are added to WoRMS. McClain and his team were also able to gather body size data from various museum collections.
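
As a sketch of why keying records to a stable identifier helps (the actual MOBS schema isn’t described here), linking each body-size record to the WoRMS identifier (the AphiaID) lets taxonomic updates propagate through a simple join; the column names and ID numbers below are invented for illustration.

```python
import pandas as pd

# Hypothetical local body-size records, keyed on a WoRMS-style identifier.
body_sizes = pd.DataFrame({
    "aphia_id": [101, 102],
    "max_length_cm": [110.0, 30.0],
})

# Hypothetical extract of the WoRMS taxonomy (in practice this side would be
# refreshed from WoRMS as taxonomists add or revise species).
worms = pd.DataFrame({
    "aphia_id": [101, 102, 103],
    "scientific_name": ["Species one", "Species two", "Newly described species"],
    "status": ["accepted", "accepted", "accepted"],
})

# Because both tables share the same key, refreshing the taxonomy side
# automatically picks up name changes and newly added species.
merged = body_sizes.merge(worms, on="aphia_id", how="left")
print(merged)
```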

The MOBS database focuses on body length (a linear measurement) as opposed to body mass. “Almost every taxonomic description of a new species has some sort of linear measurement,” said McClain. “For most organisms, it’s a length, maybe a width, and if you’re really lucky you might get a height. It’s very rare for anything to be weighed unless it’s an objective of the study. So that data simply doesn’t exist.”

While all mammals generally have a similar density, that’s not true across marine groups. “If you compare the density of a sea slug, a nudibranch, versus a jellyfish, even though they have the same masses, their carbon contents are much different,” he said. “And a one-meter worm that’s a cylinder and a one-meter sea urchin that’s a sphere are fundamentally different weights and different kinds of organisms.” One solution for the latter is to convert length to volume to account for shape differences. Length-to-weight ratios can also differ substantially for different marine animal groups. That’s why McClain hopes to compile a separate database for length-to-weight conversions.
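
To make the worm-versus-urchin point concrete with illustrative numbers (not MOBS data): two animals of the same one-meter “length” but different shapes occupy wildly different volumes, which is why length alone is a poor stand-in for mass across body plans.

```python
import math

length = 1.0          # meters, the shared "body length"
worm_radius = 0.01    # assume a worm about 2 cm thick

worm_volume = math.pi * worm_radius**2 * length          # cylinder: pi * r^2 * h
urchin_volume = (4 / 3) * math.pi * (length / 2) ** 3    # sphere with a 1 m diameter

print(f"worm:   {worm_volume * 1000:8.2f} liters")        # ~0.31 L
print(f"urchin: {urchin_volume * 1000:8.2f} liters")      # ~523.6 L
```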


The nine-armed octopus and the oddities of the cephalopod nervous system


A mix of autonomous and top-down control manages the octopus’s limbs.

With their quick-change camouflage and high level of intelligence, it’s not surprising that the public and scientific experts alike are fascinated by octopuses. Their abilities to recognize faces, solve puzzles, and learn behaviors from other octopuses make these animals a captivating study.

To perform these processes and others, like crawling or exploring, octopuses rely on their complex nervous system, one that has become a focus for neuroscientists. With about 500 million neurons—around the same number as dogs—octopuses’ nervous systems are the most complex of any invertebrate. But, unlike vertebrate organisms, the octopus’s nervous system is also decentralized, with around 350 million neurons, or 66 percent of it, located in its eight arms.

“This means each arm is capable of independently processing sensory input, initiating movement, and even executing complex behaviors—without direct instructions from the brain,” explains Galit Pelled, a professor of Mechanical Engineering, Radiology, and Neuroscience at Michigan State University who studies octopus neuroscience. “In essence, the arms have their own ‘mini-brains.’”

A decentralized nervous system is one factor that helps octopuses adapt to changes, such as injury or predation, as seen in the case of an Octopus vulgaris, or common octopus, that was observed with nine arms by researchers at the ECOBAR lab at the Institute of Marine Research in Spain between 2021 and 2022.

By studying outliers like this cephalopod, researchers can gain insight into how the animal’s detailed scaffolding of nerves changes and regrows over time, uncovering more about how octopuses have evolved over millennia in our oceans.

Brains, brains, and more brains

Because each arm of an octopus contains its own bundle of neurons, the limbs can operate semi-independently from the central brain, enabling faster responses since signals don’t always need to travel back and forth between the brain and the arms. In fact, Pelled and her team recently discovered that “neural signals recorded in the octopus arm can predict movement type within 100 milliseconds of stimulation, without central brain involvement.” She notes that “that level of localized autonomy is unprecedented in vertebrate systems.”

Though each limb moves on its own, the movements of the octopus’s body are smooth and conducted with a coordinated elegance that allows the animal to exhibit one of the broadest ranges of behaviors, adapting on the fly to changes in its surroundings.

“That means the octopus can react quickly to its environment, especially when exploring, hunting, or defending itself,” Pelled says. “For example, one arm can grab food while another is feeling around a rock, without needing permission from the brain. This setup also makes the octopus more resilient. If one arm is injured, the others still work just fine. And because so much decision-making happens at the arms, the central brain is freed up to focus on the bigger picture—like navigating or learning new tasks.”

As if each limb weren’t already buzzing with neural activity, things get even more intricate when researchers zoom in further—to the nerves within each individual sucker, a ring of muscular tissue, which octopuses use to sense and taste their surroundings.

“There is a sucker ganglion, or nerve center, located in the stalk of every sucker. For some species of octopuses, that’s over a thousand ganglia,” says Cassady Olson, a graduate student at the University of Chicago who works with Cliff Ragsdale, a leading expert in octopus neuroscience.

Given that each sucker has its own nerve center, connected by a long axial nerve cord running down the limb, and each arm has hundreds of suckers, things get complicated very quickly. Researchers have historically struggled to study this peripheral nervous system, as it’s called, within the octopus’s body.

“The large size of the brain makes it both really exciting to study and really challenging,” says Z. Yan Wang, an assistant professor of biology and psychology at the University of Washington. “Many of the tools available for neuroscience have to be adjusted or customized specifically for octopuses and other cephalopods because of their unique body plans.”

While each limb acts independently, signals are transmitted back to the octopus’s central nervous system. The octopus’ brain sits between its eyes at the front of its mantle, or head, couched between its two optic lobes, large bean-shaped neural organs that help octopuses see the world around them. These optic lobes are just two of the over 30 lobes experts study within the animal’s centralized brain, as each lobe helps the octopus process its environment.

This elaborate neural architecture is critical given the octopus’s dual role in the ecosystem as both predator and prey. Without natural defenses like a hard shell, octopuses have evolved a highly adaptable nervous system that allows them to rapidly process information and adjust as needed, helping their chances of survival.

Some similarities remain

While the octopus’s decentralized nervous system makes it a unique evolutionary example, it does have some structures similar or analogous to those in the human nervous system.

“The octopus has a central brain mass located between its eyes, and an axial nerve cord running down each arm (similar to a spinal cord),” says Wang. “The octopus has many sensory systems that we are familiar with, such as vision, touch (somatosensation), chemosensation, and gravity sensing.”

Neuroscientists have homed in on these similarities to understand how these structures may have evolved across the different branches in the tree of life. As the most recent common ancestor for humans and octopuses lived around 750 million years ago, experts believe that many similarities, from similar camera-like eyes to maps of neural activities, evolved separately in a process known as convergent evolution.

While these similarities shed light on evolution’s independent paths, they also offer valuable insights for fields like soft robotics and regenerative medicine.

Occasionally, unique individuals—like an octopus with an unexpected number of limbs—can provide even deeper clues into how this remarkable nervous system functions and adapts.

Nine arms, no problem

In 2021, researchers from the Institute of Marine Research in Spain used an underwater camera to follow a male Octopus vulgaris, or common octopus. On its left side, three arms were intact, while the others were reduced to uneven, stumpy lengths, sharply bitten off at varying points. Although the researchers didn’t witness the injury itself, they observed that the front right arm—known as R1—was regenerating unusually, splitting into two separate limbs and giving the octopus a total of nine arms.

“In this individual, we believe this condition was a result of abnormal regeneration [a genetic mutation] after an encounter with a predator,” explains Sam Soule, one of the researchers and the first author on the corresponding paper recently published in Animals.

The researchers named the octopus Salvador due to its bifurcated arm coiling up on itself like the two upturned ends of Salvador Dalí’s moustache. For two years, the team studied the cephalopod’s behavior and found that it used its bifurcated arm less when doing “riskier” movements such as exploring or grabbing food, which would force the animal to stretch its arm out and expose it to further injury.

“One of the conclusions of our research is that the octopus likely retains a long-term memory of the original injury, as it tends to use the bifurcated arms for less risky tasks compared to the others,” elaborates Jorge Hernández Urcera, a lead author of the study. “This idea of lasting memory brought to mind Dalí’s famous painting The Persistence of Memory, which ultimately became the title of the paper we published on monitoring this particular octopus.”

While the octopus acted more protective of its extra limb, its nervous system had adapted to using the extra appendage, as the octopus was observed, after some time recovering from its injuries, using its ninth arm for probing its environment.

“That nine-armed octopus is a perfect example of just how adaptable these animals are,” Pelled adds. “Most animals would struggle with an unusual body part, but not the octopus. In this case, the octopus had a bifurcated (split) arm and still used it effectively, just like any other arm. That tells us the nervous system didn’t treat it as a mistake—it figured out how to make it work.”

Kenna Hughes-Castleberry is the science communicator at JILA (a joint physics research institute between the National Institute of Standards and Technology and the University of Colorado Boulder) and a freelance science journalist. Her main writing focuses are quantum physics, quantum technology, deep technology, social media, and the diversity of people in these fields, particularly women and people from minority ethnic and racial groups. Follow her on LinkedIn or visit her website.


Research roundup: 7 stories we almost missed


Ping-pong bots, drumming chimps, picking styles of two jazz greats, and an ancient underground city’s soundscape

Time lapse photos show a new ping-pong-playing robot performing a top spin. Credit: David Nguyen, Kendrick Cancio and Sangbae Kim

It’s a regrettable reality that there is never time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. May’s list includes a nifty experiment to make a predicted effect of special relativity visible; a ping-pong playing robot that can return hits with 88 percent accuracy; and the discovery of the rare genetic mutation that makes orange cats orange, among other highlights.

Special relativity made visible

The Terrell-Penrose effect: Fast objects appear rotated

Credit: TU Wien

Perhaps the most well-known features of Albert Einstein’s special theory of relativity are time dilation and length contraction. In 1959, two physicists predicted another feature of relativistic motion: an object moving near the speed of light should also appear to be rotated. It’s not been possible to demonstrate this experimentally, however—until now. Physicists at the Vienna University of Technology figured out how to reproduce this rotational effect in the lab using laser pulses and precision cameras, according to a paper published in the journal Communications Physics.

They found their inspiration in art, specifically an earlier collaboration with an artist named Enar de Dios Rodriguez, who collaborated with VUT and the University of Vienna on a project involving ultra-fast photography and slow light. For this latest research, they used objects shaped like a cube and a sphere and moved them around the lab while zapping them with ultrashort laser pulses, recording the flashes with a high-speed camera.

Getting the timing just right effectively simulates a speed of light of just 2 meters per second. After photographing the objects many times using this method, the team then combined the still images into a single image. The results: the cube looked twisted and the sphere’s North Pole was in a different location—a demonstration of the rotational effect predicted back in 1959.
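
For reference, the textbook statement of the Terrell-Penrose result (not taken from the new paper) is that an object moving transverse to the line of sight at speed v appears rotated rather than flattened, by an angle set by v/c:

```latex
% Terrell rotation: an object moving transverse to the line of sight at speed v
% (with \beta = v/c and \gamma = 1/\sqrt{1-\beta^2}) appears rotated by \theta:
\sin\theta = \beta, \qquad \cos\theta = \frac{1}{\gamma} = \sqrt{1-\beta^2}
```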

Communications Physics, 2025. DOI: 10.1038/s42005-025-02003-6

Drumming chimpanzees

A chimpanzee feeling the rhythm. Credit: Current Biology/Eleuteri et al., 2025.

Chimpanzees are known to “drum” on the roots of trees as a means of communication, often combining that action with what are known as “pant-hoot” vocalizations (see above video). Scientists have found that the chimps’ drumming exhibits key elements of musical rhythm much like human drumming, specifically non-random timing and isochrony, according to a paper published in the journal Current Biology. And chimps from different geographical regions have different drumming rhythms.

Back in 2022, the same team observed that individual chimps had unique styles of “buttress drumming,” which served as a kind of communication, letting others in the same group know their identity, location, and activity. This time around they wanted to know if this was also true of chimps living in different groups and whether their drumming was rhythmic in nature. So they collected video footage of the drumming behavior among 11 chimpanzee communities across six populations in East Africa (Uganda) and West Africa (Ivory Coast), amounting to 371 drumming bouts.

Their analysis of the drum patterns confirmed their hypothesis. The western chimps drummed in regularly spaced hits, used faster tempos, and started drumming earlier during their pant-hoot vocalizations. Eastern chimps would alternate between shorter and longer spaced hits. Since this kind of rhythmic percussion is one of the earliest evolved forms of human musical expression and is ubiquitous across cultures, findings such as this could shed light on how our love of rhythm evolved.
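
As a rough illustration of how isochrony can be quantified (not the authors’ actual analysis), one can take the times of successive hits, compute the inter-onset intervals, and measure how evenly spaced they are; a coefficient of variation near zero means nearly perfectly regular drumming.

```python
import numpy as np

def isochrony_cv(hit_times):
    """Coefficient of variation of inter-onset intervals: 0 = perfectly regular."""
    intervals = np.diff(np.sort(np.asarray(hit_times, dtype=float)))
    return intervals.std() / intervals.mean()

regular = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]        # evenly spaced hits (in seconds)
alternating = [0.0, 0.3, 1.0, 1.3, 2.0, 2.3]    # short-long alternation
print(round(isochrony_cv(regular), 3))      # 0.0
print(round(isochrony_cv(alternating), 3))  # clearly greater than 0
```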

Current Biology, 2025. DOI: 10.1016/j.cub.2025.04.019

Distinctive styles of two jazz greats

Wes Montgomery (left) and Joe Pass (right) playing guitars

Jazz lovers likely need no introduction to Joe Pass and Wes Montgomery, 20th century guitarists who influenced generations of jazz musicians with their innovative techniques. Montgomery, for instance, didn’t use a pick, preferring to pluck the strings with his thumb—a method he developed because he practiced at night after working all day as a machinist and didn’t want to wake his children or neighbors. Pass developed his own range of picking techniques, including fingerpicking, hybrid picking, and “flat picking.”

Chirag Gokani and Preston Wilson, both with Applied Research Laboratories and the University of Texas, Austin, greatly admired both Pass and Montgomery and decided to explore the underlying acoustics of their distinctive playing, modeling the interactions of the thumb, fingers, and pick with a guitar string. They described their research during a meeting of the Acoustical Society of America in New Orleans, LA.

Among their findings: Montgomery achieved his warm tone by playing closer to the bridge and mostly plucking at the string. Pass’s rich tone arose from a combination of using a pick and playing closer to the guitar neck. There were also differences in how much the thumb, finger, and pick slip off the string: use of the thumb (Montgomery) produced more of a “pluck” compared to the pick (Pass), which produced more of a “strike.” Gokani and Wilson think their model could be used to synthesize digital guitars with a more realistic sound, as well as to help guitarists better emulate Pass and Montgomery.

Sounds of an ancient underground city

A collection of images from the underground tunnels of Derinkuyu.

Credit: Sezin Nas

Turkey is home to the underground city Derinkuyu, originally carved out inside soft volcanic rock around the 8th century BCE. It was later expanded to include four main ventilation channels (and some 50,000 smaller shafts) serving seven levels, which could be closed off from the inside with a large rolling stone. The city could hold up to 20,000 people and it  was connected to another underground city, Kaymakli, via tunnels. Derinkuyu helped protect Arab Muslims during the Arab-Byzantine wars, served as a refuge from the Ottomans in the 14th century, and as a haven for Armenians escaping persecution in the early 20th century, among other functions.

The tunnels were rediscovered in the 1960s and about half of the city has been open to visitors since 2016. The site is naturally of great archaeological interest, but there has been little to no research on the acoustics of the site, particularly the ventilation channels—one of Derinkuyu’s most unique features, according to Sezin Nas, an architectural acoustician at Istanbul Galata University in Turkey.  She gave a talk at a meeting of the Acoustical Society of America in New Orleans, LA, about her work on the site’s acoustic environment.

Nas analyzed a church, a living area, and a kitchen, measuring sound sources and reverberation patterns, among other factors, to create a 3D virtual soundscape. The hope is that a better understanding of this aspect of Derinkuyu could improve the design of future underground urban spaces—as well as one day using her virtual soundscape to enable visitors to experience the sounds of the city themselves.

MIT’s latest ping-pong robot

Robots playing ping-pong have been a thing since the 1980s, of particular interest to scientists because it requires the robot to combine the slow, precise ability to grasp and pick up objects with dynamic, adaptable locomotion. Such robots need high-speed machine vision, fast motors and actuators, precise control, and the ability to make accurate predictions in real time, not to mention being able to develop a game strategy. More recent designs use AI techniques to allow the robots to “learn” from prior data to improve their performance.

MIT researchers have built their own version of a ping-pong playing robot, incorporating a lightweight design and the ability to precisely return shots. They built on prior work developing the Humanoid, a small bipedal two-armed robot—specifically, modifying the Humanoid’s arm by adding an extra degree of freedom to the wrist so the robot could control a ping-pong paddle. They tested their robot by mounting it on a ping-pong table and lobbing 150 balls at it from the other side of the table, capturing the action with high-speed cameras.

The new bot can execute three different swing types (loop, drive, and chip) and during the trial runs it returned the ball with impressive accuracy across all three types: 88.4 percent, 89.2 percent, and 87.5 percent, respectively. Subsequent tweaks to their system brought the robot’s strike speed up to 19 meters per second (about 42 MPH), within the 12 to 25 meters per second range of advanced human players. The addition of control algorithms gave the robot the ability to aim. The robot still has limited mobility and reach because it has to be fixed to the ping-pong table, but the MIT researchers plan to rig it to a gantry or wheeled platform in the future to address that shortcoming.

Why orange cats are orange

an orange tabby kitten

Cat lovers know orange cats are special for more than their unique coloring, but that’s the quality that has intrigued scientists for almost a century. Sure, lots of animals have orange, ginger, or yellow hues, like tigers, orangutans, and golden retrievers. But in domestic cats, that color is specifically linked to sex. Almost all orange cats are male. Scientists have now identified the genetic mutation responsible, and it appears to be unique to cats, according to a paper published in the journal Current Biology.

Prior work had narrowed down the region on the X chromosome most likely to contain the relevant mutation. The scientists knew that females usually have just one copy of the mutation and in that case have tortoiseshell (partially orange) coloring, although in rare cases, a female cat will be orange if both X chromosomes have the mutation. Over the last five to ten years, there has been an explosion in genome resources (including complete sequenced genomes) for cats, which greatly aided the team’s research, along with additional DNA samples taken from cats at spay and neuter clinics.

From an initial pool of 51 candidate variants, the scientists narrowed it down to three genes, only one of which was likely to play any role in gene regulation: Arhgap36. It wasn’t known to play any role in pigment cells in humans, mice, or non-orange cats. But orange cats are special; their mutation (sex-linked orange) turns on Arhgap36 expression in pigment cells (and only pigment cells), thereby interfering with the molecular pathway that controls coat color in other orange-shaded mammals. The scientists suggest that this is an example of how genes can acquire new functions, thereby enabling species to better adapt and evolve.

Current Biology, 2025. DOI: 10.1016/j.cub.2025.03.075

Not a Roman “massacre” after all

Two of the skeletons excavated by Mortimer Wheeler in the 1930s, dating from the 1st century AD.

Credit: Martin Smith

In 1936, archaeologists excavating the Iron Age hill fort Maiden Castle in the UK unearthed dozens of human skeletons, all showing signs of lethal injuries to the head and upper body—likely inflicted with weaponry. At the time, this was interpreted as evidence of a pitched battle between the Britons of the local Durotriges tribe and invading Romans. The Romans slaughtered the native inhabitants, thereby bringing a sudden violent end to the Iron Age. At least that’s the popular narrative that has prevailed ever since in countless popular articles, books, and documentaries.

But a paper published in the Oxford Journal of Archaeology calls that narrative into question. Archaeologists at Bournemouth University have re-analyzed those burials, incorporating radiocarbon dating into their efforts. They concluded that those individuals didn’t die in a single brutal battle. Rather, it was Britons killing other Britons over multiple generations between the first century BCE and the first century CE—most likely in periodic localized outbursts of violence in the lead-up to the Roman conquest of Britain. It’s possible there are still many human remains waiting to be discovered at the site, which could shed further light on what happened at Maiden Castle.

Oxford Journal of Archaeology, 2025. DOI: 10.1111/ojoa.12324


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


Scientists figure out how the brain forms emotional connections

Whenever something bad happens to us, brain systems responsible for mediating emotions kick in to prevent it from happening again. When we get stung by a wasp, the association between pain and wasps is encoded in the region of the brain called the amygdala, which connects simple stimuli with basic emotions.

But the brain does more than simple associations; it also encodes lots of other stimuli that are less directly connected with the harmful event—things like the place where we got stung or the wasps’ nest in a nearby tree. These are combined into complex emotional models of potentially threatening circumstances.

Until now, we didn’t know exactly how these models were built. But we’re beginning to understand how it’s done.

Emotional complexity

“Decades of work has revealed how simple forms of emotional learning occurs—how sensory stimuli are paired with aversive events,” says Joshua Johansen, a team director at the Neural Circuitry of Learning and Memory at RIKEN Center for Brain Science in Tokyo. But Johansen says that these decades didn’t bring much progress in treating psychiatric conditions like anxiety and trauma-related disorders. “We thought if we could get a handle of more complex emotional processes and understand their mechanisms, we may be able to provide relief for patients with conditions like that,” Johansen claims.

To make it happen, his team performed experiments designed to trigger complex emotional processes in rats while closely monitoring their brains.

Johansen and Xiaowei Gu, his co-author and colleague at RIKEN, started by dividing the rats into two groups. The first “paired” group of rats was conditioned to associate an image with a sound. The second “unpaired” group watched the same image and listened to the same sound, but not at the same time. This prevented the rats from making an association.


Carnivorous crocodile-like monsters used to terrorize the Caribbean

How did reptilian things that looked something like crocodiles get to the Caribbean islands from South America millions of years ago? They probably walked.

The existence of any prehistoric apex predators in the islands of the Caribbean used to be doubted. While their absence would have probably made it even more of a paradise for prey animals, fossils unearthed in Cuba, Puerto Rico, and the Dominican Republic have revealed that these islands were crawling with monster crocodyliform species called sebecids, ancient relatives of crocodiles.

While sebecids first emerged during the Cretaceous, this is the first evidence of them lurking outside South America during the Cenozoic era, which began 66 million years ago. An international team of researchers has found that these creatures stalked and hunted in the Caribbean islands millions of years after similar predators went extinct on the South American mainland. Lower sea levels back then could have exposed enough land for them to walk across.

“Adaptations to a terrestrial lifestyle documented for sebecids and the chronology of West Indian fossils strongly suggest that they reached the islands in the Eocene-Oligocene through transient land connections with South America or island hopping,” researchers said in a study recently published in Proceedings of the Royal Society B.

Origin story

During the late Eocene to early Oligocene epochs of the mid-Cenozoic, about 34 million years ago, many terrestrial carnivores already roamed South America. Along with crocodyliform sebecids, these included enormous snakes, terror birds, and metatherians, monstrous relatives of marsupials. At this time, sea levels were low, and the islands of the Eastern Caribbean are thought to have been connected to South America via a land bridge called GAARlandia (Greater Antilles and Aves Ridge). GAARlandia is not the first land bridge thought to have offered such a migration route.

Fragments of a single tooth unearthed in Seven Rivers, Jamaica, in 1999 are the oldest fossil evidence of a ziphodont crocodyliform (a group that includes sebecids) in the Caribbean. The tooth was dated to about 47 million years ago, when Jamaica was connected to an extension of the North American continent known as the Nicaragua Rise. While the tooth from Seven Rivers is thought to have belonged to a ziphodont other than a sebecid, it and other vertebrate fossils found in Jamaica suggest parallels with ecosystems excavated from sites in the American South.

The fossils found in Jamaica and in areas like the US South, now separated by ocean, suggest more than just related life forms. It’s possible that the Nicaragua Rise provided a migration pathway similar to the one sebecids probably used when they arrived in the Caribbean islands.

Carnivorous crocodile-like monsters used to terrorize the Caribbean Read More »

cyborg-cicadas-play-pachelbel’s-canon

Cyborg cicadas play Pachelbel’s Canon

The distinctive chirps of singing cicadas are a highlight of summer in regions where they proliferate; those chirps even featured prominently on Lorde’s 2021 album Solar Power. Now, Japanese scientists at the University of Tsukuba have figured out how to transform cicadas into cyborg insects capable of “playing” Pachelbel’s Canon. They described their work in a preprint published on the physics arXiv. You can listen to the sounds here.

Scientists have been intrigued by the potential of cyborg insects since the 1990s, when researchers began implanting tiny electrodes into cockroach antennae and shocking them to direct their movements. The idea was to use them as hybrid robots for search-and-rescue applications.

For instance, in 2015, Texas A&M scientists found that implanting electrodes into a cockroach’s ganglion (the neuron cluster that controls its front legs) was remarkably effective, successfully steering the roaches 60 percent of the time. They outfitted the roaches with tiny backpacks synced to a remote controller and administered shocks to disrupt the insects’ balance, forcing them to move in the desired direction.

And in 2021, scientists at Nanyang Technological University in Singapore turned Madagascar hissing cockroaches into cyborgs, implanting electrodes in sensory organs known as cerci that were then connected to tiny computers. Applying electrical current enabled them to steer the cockroaches successfully 94 percent of the time in simulated disaster scenes in the lab.

The authors of this latest paper were inspired by that 2021 project and decided to apply the basic concept to singing cicadas, with the idea that cyborg cicadas might one day be used to transmit warning messages during emergencies. It’s usually the males who do the singing, and each species has a unique song. In most species, the production of sound occurs via a pair of membrane structures called tymbals, which are just below each side of the insect’s anterior abdominal region. The tymbal muscles contract and cause the plates to vibrate while the abdomen acts as a kind of resonating chamber to amplify the song.

Cyborg cicadas play Pachelbel’s Canon Read More »

some-flies-go-insomniac-to-ward-off-parasites

Some flies go insomniac to ward off parasites

Those genes associated with metabolism were upregulated, meaning they showed an increase in activity. An observed loss of body fat and protein reserves was evidently a trade-off for resistance to mites. This suggests there was increased lipolysis, or the breakdown of fats, and proteolysis, the breakdown of proteins, in resistant lines of flies.

Parasite paranoia

The depletion of nutrients could make fruit flies less likely to survive even without mites feeding off them, but their tenacity in staying up through the night suggests that being parasitized by mites is still the greater risk. Because mite-resistant flies did not sleep, their oxygen consumption and activity also increased during the night, reaching levels no different from those of control-group flies during the day.

Keeping mites away involves moving around so the fly can buzz off if mites crawl too close. Knowing this, Benoit wanted to see what would happen if the resistant flies’ movement was restricted. The result was doom: when the flies were restrained, the mite-resistant flies were as susceptible to mites as the controls. Activity, it turned out, was essential for resisting mites.

Since mites are ectoparasites, or external parasites (as opposed to internal parasites like tapeworms), potential hosts like flies can benefit from hypervigilance. Sleep is typically beneficial to a host invaded by an internal parasite because it increases the immune response. Unfortunately for the flies, sleeping would only make them an easy meal for mites. Keeping both compound eyes out for an external parasite means there is no time left for sleep.

“The pattern of reduced sleep likely allows the flies to be more responsive during encounters with mites during the night,” the researchers said in their study, which was recently published in Biological Timing and Sleep. “There could be differences in sleep occurring during the day, but these differences may be less important as D. melanogaster sleeps much less during the day.”

Fruit flies aren’t the only creatures with sleep patterns that parasites disrupt. Shifts in sleep and rest have been documented in birds and bats when there is a risk of parasitism after dark. For the flies, exhaustion has the upside of better fertility if they manage to avoid bites, so a mate must be worth all those sleepless nights.

DOI: Biological Timing and Sleep, 2025. 10.1038/s44323-025-00031-7

Some flies go insomniac to ward off parasites Read More »