neurobiology


Figuring out why a nap might help people see things in new ways


An EEG signal of sleep is associated with better performance on a mental task.

The guy in the back may be doing a more useful activity. Credit: XAVIER GALIANA

Dmitri Mendeleev famously saw the complete arrangement of the periodic table after falling asleep at his desk. He claimed that in his dream he saw a table where all the elements fell into place, and that he wrote it all down when he woke up. By having a eureka moment right after a nap, he joined a club full of rather talented people: Mary Shelley, Thomas Edison, and Salvador Dalí.

To figure out if there’s a grain of truth to all these anecdotes, a team of German scientists at the University of Hamburg, led by cognitive science researcher Anika T. Löwe, conducted an experiment designed to trigger such nap-following strokes of genius—and catch them in the act with EEG brain monitoring gear. And they kind of succeeded.

Catching Edison’s cup

“Thomas Edison had this technique where he held a cup or something like that when he was napping in his chair,” says Nicolas Schuck, a professor of cognitive science at the University of Hamburg and senior author of the study. “When he fell asleep too deeply, the cup falling from his hand would wake him up—he was convinced that was the way to trigger these eureka moments.” While dozing off in a chair with a book or a cup doesn’t seem particularly radical, a number of cognitive scientists got serious about re-creating Edison’s approach to insights and testing it in their experiments.

One such recent study was done at Sorbonne University by Célia Lacaux, a cognitive neuroscientist, and her colleagues. Over 100 participants were presented with a mathematical problem and told it could be solved by applying two simple rules in a stepwise manner. However, there was also an undescribed shortcut that made reaching the solution much quicker. The goal was to see if participants would figure this shortcut out after an Edison-style nap. The scientists would then check whether the eureka moment showed up in the EEG.

Lacaux’s team also experimented with different objects the participants held while napping: spoons, steel spheres, stress balls, etc. It turned out Edison was right, and a cup was by far the best choice. It also turned out that most participants recognized there was a hidden rule after the falling cup woke them up. The nap was brief, only long enough to enter the light, non-REM N1 phase of sleep.

Initially, Schuck’s team wanted to replicate the results of Lacaux’s study. They even bought the exact same make of cups, but the cups failed this time. “For us, it just didn’t work. People who fell asleep often didn’t drop these cups—I don’t know why,” Schuck says.

The bigger surprise, however, was that the N1 phase sleep didn’t work either.

Tracking the dots

Schuck’s team set up an experiment that involved asking 90 participants to track dots on a screen in a series of trials, with a 20-minute-long nap in between. The dots were rather small, colored either purple or orange, placed in a circle, and they moved in one of two directions. The task for the participants was to determine the direction the dots were moving. That could range from easy to really hard, depending on the amount of jitter the team introduced.

The insight the participants could discover was hidden in the color coding. After a few trials where the dots’ direction was random, the team introduced a change that tied the movement to the color: orange dots always moved in one direction, and the purple dots moved in the other. It was up to the participants to figure this out, either while awake or through a nap-induced insight.

Those dots were the first difference between Schuck’s experiment and the Sorbonne study. Lacaux had her participants cracking a mathematical problem that relied on analytical skills. Schuck’s task was more about perceptiveness and out-of-the-box thinking.

The second difference was that the cups failed to drop and wake participants up. Muscles usually relax more when sleep gets deeper, which is why most people drop whatever they’re holding either at the end of the N1 phase or at the onset of the N2 phase, when the body starts to lose voluntary motor control. “We didn’t really prevent people from reaching the N2 phase, and it turned out the participants who reached the N2 phase had eureka moments most often,” Schuck explains.

Over 80 percent of people who reached the deeper, N2 phase of sleep found the color-coding solution. Participants who fell into a light N1 sleep had a 61 percent success rate; that dropped to just 55 percent in a group that stayed awake during their 20-minute nap time. In a control group that did the same task without a nap break, only 49 percent of participants figured out the hidden trick.

The divergent results in Lacaux’s and Schuck’s experiments were puzzling, so the team looked at the EEG readouts, searching for features in the data that could predict eureka moments better than sleep phases alone. And they found something.

The slope of genius

The EEG signal recorded from the human brain contains a mix of low and high frequencies. When the signal’s power is plotted against frequency, the steepness of the resulting curve is called the spectral slope. When we are awake, there are a lot of high-frequency signals, and this slope looks rather flat. During sleep, the high frequencies get muted, low-frequency signals dominate, and the slope gets steeper. Usually, the deeper we sleep, the steeper our EEG slope is.
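To make the idea concrete, here is a minimal sketch of how such a slope can be estimated: compute a power spectrum, then fit a line to log power versus log frequency. The sampling rate, frequency band, and windowing below are illustrative assumptions, not the study’s actual pipeline.

```python
# Estimate an EEG spectral slope: fit a line to log10(power) vs. log10(frequency).
# All parameters are illustrative assumptions, not the study's pipeline.
import numpy as np
from scipy.signal import welch

def spectral_slope(eeg, fs=250.0, fmin=1.0, fmax=45.0):
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))  # 4-second windows
    band = (freqs >= fmin) & (freqs <= fmax)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
    return slope  # more negative = steeper slope, as in deeper sleep

# Toy check: white noise has a flat spectrum, so its slope sits near zero.
rng = np.random.default_rng(0)
print(spectral_slope(rng.standard_normal(250 * 60)))  # one minute at 250 Hz
```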

The team noticed that eureka moments seemed to be highly correlated with a steep EEG spectral slope—the steeper the slope, the more likely people were to have a breakthrough. In fact, models based on the EEG spectral slope alone predicted eureka moments better than models based on sleep phases, and even better than models based on the sleep phases and EEG readouts combined.

“Traditionally, people divided sleep EEG readouts into discrete stages like N1 or N2, but as usual in biology, things in reality are not as discrete,” Schuck says. “They’re much more continuous; there’s kind of a gray zone.” He told Ars that looking specifically at the EEG trace may help us better understand what exactly happens in the brain when a sudden moment of insight arrives.

But Schuck wants to get even more data in the future. “We’re currently running a study that’s been years in the making: We want to use both EEG and [functional magnetic resonance imaging] at the same time to see what happens in the brain when people are sleeping,” Schuck says. The addition of fMRI will enable Schuck and his colleagues to see which areas of the brain get activated during sleep. What the team wants to learn from combining EEG and fMRI imagery is how sleep boosts memory consolidation.

“We also hope to get some insights, no pun intended, into the processes that play a role in generating insights,” Schuck adds.

PLOS Biology, 2025.  DOI: 10.1371/journal.pbio.3003185


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


A neural brain implant provides near instantaneous speech


Focusing on sound production instead of word choice makes for a flexible system.

The participant’s implant gets hooked up for testing. Credit: UC Regents

Stephen Hawking, a British physicist and arguably the most famous man suffering from amyotrophic lateral sclerosis (ALS), communicated with the world using a sensor installed in his glasses. That sensor used tiny movements of a single muscle in his cheek to select characters on a screen. Once he typed a full sentence at a rate of roughly one word per minute, the text was synthesized into speech by a DECtalk TC01 synthesizer, which gave him his iconic, robotic voice.

But a lot has changed since Hawking died in 2018. Recent brain-computer interface (BCI) devices have made it possible to translate neural activity directly into text and even speech. Unfortunately, these systems have suffered from significant latency, have often limited the user to a predefined vocabulary, and have not handled nuances of spoken language like pitch or prosody. Now, a team of scientists at the University of California, Davis has built a neural prosthesis that can instantly translate brain signals into sounds—phonemes and words. It may be the first real step we have taken toward a fully digital vocal tract.

Text messaging

“Our main goal is creating a flexible speech neuroprosthesis that enables a patient with paralysis to speak as fluently as possible, managing their own cadence, and be more expressive by letting them modulate their intonation,” says Maitreyee Wairagkar, a neuroprosthetics researcher at UC Davis who led the study. Developing a prosthesis that ticks all these boxes was an enormous challenge because it meant Wairagkar’s team had to solve nearly all the problems BCI-based communication solutions have faced in the past. And those solutions had quite a lot of problems.

The first issue was moving beyond text. Most successful neural prostheses developed so far have translated brain signals into text: the words a patient with an implanted prosthesis wanted to say simply appeared on a screen. Francis R. Willett led a team at Stanford University that achieved brain-to-text translation with around a 25 percent error rate. “When a woman with ALS was trying to speak, they could decode the words. Three out of four words were correct. That was super exciting but not enough for daily communication,” says Sergey Stavisky, a neuroscientist at UC Davis and a senior author of the study.

Delays and dictionaries

One year after the Stanford work, in 2024, Stavisky’s team published its own research on a brain-to-text system that bumped the accuracy to 97.5 percent. “Almost every word was correct, but communicating over text can be limiting, right?” Stavisky said. “Sometimes you want to use your voice. It allows you to make interjections, it makes it less likely other people interrupt you—you can sing, you can use words that aren’t in the dictionary.” But the most common approach to generating speech relied on synthesizing it from text, which led straight into another problem with BCI systems: very high latency.

In nearly all BCI speech aids, sentences appeared on a screen after a significant delay, long after the patient finished stringing the words together in their mind. The speech synthesis part usually happened after the text was ready, which caused even more delay. Brain-to-text solutions also suffered from a limited vocabulary. The latest system of this kind supported a dictionary of roughly 1,300 words. When you tried to speak a different language, use more elaborate vocabulary, or even say the unusual name of a café just around the corner, the systems failed.

So, Wairagkar designed her prosthesis to translate brain signals into sounds, not words—and do it in real time.

Extracting sound

The patient who agreed to participate in Wairagkar’s study was codenamed T15 and was a 46-year-old man suffering from ALS. “He is severely paralyzed and when he tries to speak, he is very difficult to understand. I’ve known him for several years, and when he speaks, I understand maybe 5 percent of what he’s saying,” says David M. Brandman, a neurosurgeon and co-author of the study. Before working with the UC Davis team, T15 communicated using a gyroscopic head mouse to control a cursor on a computer screen.

To use an early version of Stavisky’s brain-to-text system, the patient had 256 microelectrodes implanted into his ventral precentral gyrus, an area of the brain responsible for controlling vocal tract muscles.

For the new brain-to-speech system, Wairagkar and her colleagues relied on the same 256 electrodes. “We recorded neural activities from single neurons, which is the highest resolution of information we can get from our brain,” Wairagkar says. The signal registered by the electrodes was then sent to an AI algorithm called a neural decoder that deciphered those signals and extracted speech features such as pitch or voicing. In the next step, these features were fed into a vocoder, a speech synthesizing algorithm designed to sound like the voice that T15 had when he was still able to speak normally. The entire system worked with latency down to around 10 milliseconds—the conversion of brain signals into sounds was effectively instantaneous.
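In software terms, the system described above behaves like a streaming pipeline: short chunks of neural data go in, decoded speech features come out, and a vocoder renders each set of features as an audio frame. Below is a schematic sketch of that loop; every name and value in it is a hypothetical placeholder, since the study’s actual decoder and vocoder implementations aren’t detailed here.

```python
# Schematic of a low-latency brain-to-speech loop. All names and values are
# hypothetical placeholders for the components described in the article.
import numpy as np

FRAME_MS = 10  # one audio frame per ~10 ms chunk of neural data

def decode_features(neural_chunk: np.ndarray) -> dict:
    # Stand-in for a trained neural decoder: maps spike activity from
    # 256 electrodes to speech features such as pitch and voicing.
    return {"pitch_hz": 120.0, "voiced": True, "envelope": np.zeros(32)}

def vocode(features: dict) -> np.ndarray:
    # Stand-in for a personalized vocoder that renders the features as
    # 10 ms of audio in a voice resembling the patient's own.
    return np.zeros(160)  # 10 ms at a 16 kHz sampling rate

def stream(neural_source):
    for chunk in neural_source:               # a chunk arrives every FRAME_MS
        yield vocode(decode_features(chunk))  # audio out, ~one frame of delay
```

The design point that matters is that audio is emitted per chunk rather than per finished sentence, which is what keeps the delay nearly imperceptible.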

Because Wairagkar’s neural prosthesis converted brain signals into sounds, it didn’t come with a limited selection of supported words. The patient could say anything he wanted, including pseudo-words that weren’t in a dictionary and interjections like “um,” “hmm,” or “uh.” Because the system was sensitive to features like pitch or prosody, he could also vocalize questions saying the last word in a sentence with a slightly higher pitch and even sing a short melody.

But Wairagkar’s prosthesis had its limits.

Intelligibility improvements

To test the prosthesis’s performance, Wairagkar’s team first asked human listeners to match a recording of some synthesized speech by the T15 patient with one transcript from a set of six candidate sentences of similar length. Here, the results were perfect: the system achieved 100 percent intelligibility.

The issues began when the team tried something a bit harder: an open transcription test, where listeners had to work without any candidate transcripts. In this second test, the word error rate was 43.75 percent, meaning participants identified a bit more than half of the recorded words correctly. This was certainly an improvement over the intelligibility of T15’s unaided speech, where the word error rate in the same test with the same group of listeners was 96.43 percent. But the prosthesis, while promising, was not yet reliable enough to use for day-to-day communication.
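For readers unfamiliar with the metric: word error rate is the word-level edit distance between what a listener wrote down and what was actually said, divided by the number of words in the reference sentence. A minimal sketch, not the study’s evaluation code:

```python
# Word error rate: word-level edit distance divided by reference length.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words, via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the quick brown fox", "the quick brown dog"))  # 0.25
```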

“We’re not at the point where it could be used in open-ended conversations. I think of this as a proof of concept,” Stavisky says. He suggested that one way to improve future designs would be to use more electrodes. “There are a lot of startups right now building BCIs that are going to have over a thousand electrodes. If you think about what we’ve achieved with just 250 electrodes versus what could be done with a thousand or two thousand—I think it would just work,” he argued. And the work to make that happen is already underway.

Paradromics, a BCI-focused startup based in Austin, Texas, wants to go ahead with clinical trials of a speech neural prosthesis and is already seeking FDA approval. “They have a 1,600 electrode system, and they publicly stated they are going to do speech,” Stavisky says. “David Brandman, our co-author, is going to be the lead principal investigator for these trials, and we’re going to do it here at UC Davis.”

Nature, 2025.  DOI: 10.1038/s41586-025-09127-3


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


The nine-armed octopus and the oddities of the cephalopod nervous system


A mix of autonomous and top-down control manage the octopus’s limbs.

With their quick-change camouflage and high level of intelligence, it’s not surprising that the public and scientific experts alike are fascinated by octopuses. Their abilities to recognize faces, solve puzzles, and learn behaviors from other octopuses make these animals a captivating study.

To perform these processes and others, like crawling or exploring, octopuses rely on their complex nervous system, one that has become a focus for neuroscientists. With about 500 million neurons—around the same number as dogs—octopuses’ nervous systems are the most complex of any invertebrate. But, unlike that of vertebrates, the octopus’s nervous system is also decentralized, with around 350 million neurons, roughly two-thirds of the total, located in its eight arms.

“This means each arm is capable of independently processing sensory input, initiating movement, and even executing complex behaviors—without direct instructions from the brain,” explains Galit Pelled, a professor of Mechanical Engineering, Radiology, and Neuroscience at Michigan State University who studies octopus neuroscience. “In essence, the arms have their own ‘mini-brains.’”

A decentralized nervous system is one factor that helps octopuses adapt to changes, such as injury or predation, as seen in the case of an Octopus vulgaris, or common octopus, that was observed with nine arms by researchers at the ECOBAR lab at the Institute of Marine Research in Spain between 2021 and 2022.

By studying outliers like this cephalopod, researchers can gain insight into how the animal’s detailed scaffolding of nerves changes and regrows over time, uncovering more about how octopuses have evolved over millennia in our oceans.

Brains, brains, and more brains

Because each arm of an octopus contains its own bundle of neurons, the limbs can operate semi-independently from the central brain, enabling faster responses since signals don’t always need to travel back and forth between the brain and the arms. In fact, Pelled and her team recently discovered that “neural signals recorded in the octopus arm can predict movement type within 100 milliseconds of stimulation, without central brain involvement.” She notes that “that level of localized autonomy is unprecedented in vertebrate systems.”

Though each limb moves on its own, the movements of the octopus’s body are smooth and conducted with a coordinated elegance that allows the animal to exhibit one of the broadest ranges of behaviors, adapting on the fly to changes in its surroundings.

“That means the octopus can react quickly to its environment, especially when exploring, hunting, or defending itself,” Pelled says. “For example, one arm can grab food while another is feeling around a rock, without needing permission from the brain. This setup also makes the octopus more resilient. If one arm is injured, the others still work just fine. And because so much decision-making happens at the arms, the central brain is freed up to focus on the bigger picture—like navigating or learning new tasks.”

As if each limb weren’t already buzzing with neural activity, things get even more intricate when researchers zoom in further—to the nerves within each individual sucker, a ring of muscular tissue, which octopuses use to sense and taste their surroundings.

“There is a sucker ganglion, or nerve center, located in the stalk of every sucker. For some species of octopuses, that’s over a thousand ganglia,” says Cassady Olson, a graduate student at the University of Chicago who works with Cliff Ragsdale, a leading expert in octopus neuroscience.

Given that each sucker has its own nerve center, that these nerve centers are connected by a long axial nerve cord running down the limb, and that each arm has hundreds of suckers, things get complicated very quickly. Researchers have historically struggled to study this peripheral nervous system, as it’s called, within the octopus’s body.

“The large size of the brain makes it both really exciting to study and really challenging,” says Z. Yan Wang, an assistant professor of biology and psychology at the University of Washington. “Many of the tools available for neuroscience have to be adjusted or customized specifically for octopuses and other cephalopods because of their unique body plans.”

While each limb acts independently, signals are transmitted back to the octopus’s central nervous system. The octopus’s brain sits in its head between its eyes, couched between its two optic lobes, large bean-shaped neural organs that help octopuses see the world around them. These optic lobes are just two of the over 30 lobes experts study within the animal’s centralized brain, as each lobe helps the octopus process its environment.

This elaborate neural architecture is critical given the octopus’s dual role in the ecosystem as both predator and prey. Without natural defenses like a hard shell, octopuses have evolved a highly adaptable nervous system that allows them to rapidly process information and adjust as needed, helping their chances of survival.

Some similarities remain

While the octopus’s decentralized nervous system makes it a unique evolutionary example, it does have some structures similar or analogous to those in the human nervous system.

“The octopus has a central brain mass located between its eyes, and an axial nerve cord running down each arm (similar to a spinal cord),” says Wang. “The octopus has many sensory systems that we are familiar with, such as vision, touch (somatosensation), chemosensation, and gravity sensing.”

Neuroscientists have homed in on these similarities to understand how these structures may have evolved across the different branches in the tree of life. As the most recent common ancestor for humans and octopuses lived around 750 million years ago, experts believe that many similarities, from similar camera-like eyes to maps of neural activities, evolved separately in a process known as convergent evolution.

While these similarities shed light on evolution’s independent paths, they also offer valuable insights for fields like soft robotics and regenerative medicine.

Occasionally, unique individuals—like an octopus with an unexpected number of limbs—can provide even deeper clues into how this remarkable nervous system functions and adapts.

Nine arms, no problem

In 2021, researchers from the Institute of Marine Research in Spain used an underwater camera to follow a male Octopus vulgaris, or common octopus. On its left side, three arms were intact, while the others were reduced to uneven, stumpy lengths, sharply bitten off at varying points. Although the researchers didn’t witness the injury itself, they observed that the front right arm—known as R1—was regenerating unusually, splitting into two separate limbs and giving the octopus a total of nine arms.

“In this individual, we believe this condition was a result of abnormal regeneration [a genetic mutation] after an encounter with a predator,” explains Sam Soule, one of the researchers and the first author on the corresponding paper recently published in Animals.

The researchers named the octopus Salvador due to its bifurcated arm coiling up on itself like the two upturned ends of Salvador Dali’s moustache. For two years, the team studied the cephalopod’s behavior and found that it used its bifurcated arm less when doing “riskier” movements such as exploring or grabbing food, which would force the animal to stretch its arm out and expose it to further injury.

“One of the conclusions of our research is that the octopus likely retains a long-term memory of the original injury, as it tends to use the bifurcated arms for less risky tasks compared to the others,” elaborates Jorge Hernández Urcera, a lead author of the study. “This idea of lasting memory brought to mind Dalí’s famous painting The Persistence of Memory, which ultimately became the title of the paper we published on monitoring this particular octopus.”

While the octopus acted more protective of its extra limb, its nervous system adapted to the extra appendage: after some time recovering from its injuries, the octopus was observed using its ninth arm to probe its environment.

“That nine-armed octopus is a perfect example of just how adaptable these animals are,” Pelled adds. “Most animals would struggle with an unusual body part, but not the octopus. In this case, the octopus had a bifurcated (split) arm and still used it effectively, just like any other arm. That tells us the nervous system didn’t treat it as a mistake—it figured out how to make it work.”

Kenna Hughes-Castleberry is the science communicator at JILA (a joint physics research institute between the National Institute of Standards and Technology and the University of Colorado Boulder) and a freelance science journalist. Her main writing focuses are quantum physics, quantum technology, deep technology, social media, and the diversity of people in these fields, particularly women and people from minority ethnic and racial groups. Follow her on LinkedIn or visit her website.


Scientists figure out how the brain forms emotional connections

Whenever something bad happens to us, brain systems responsible for mediating emotions kick in to prevent it from happening again. When we get stung by a wasp, the association between pain and wasps is encoded in the region of the brain called the amygdala, which connects simple stimuli with basic emotions.

But the brain does more than simple associations; it also encodes lots of other stimuli that are less directly connected with the harmful event—things like the place where we got stung or the wasps’ nest in a nearby tree. These are combined into complex emotional models of potentially threatening circumstances.

Until now, we didn’t know exactly how these models are built. But we’re beginning to understand how it’s done.

Emotional complexity

“Decades of work have revealed how simple forms of emotional learning occur—how sensory stimuli are paired with aversive events,” says Joshua Johansen, who leads the Neural Circuitry of Learning and Memory lab at the RIKEN Center for Brain Science in Tokyo. But Johansen says that these decades didn’t bring much progress in treating psychiatric conditions like anxiety and trauma-related disorders. “We thought if we could get a handle on more complex emotional processes and understand their mechanisms, we may be able to provide relief for patients with conditions like that,” Johansen claims.

To make it happen, his team performed experiments designed to trigger complex emotional processes in rats while closely monitoring their brains.

Johansen and Xiaowei Gu, his co-author and colleague at RIKEN, started by dividing the rats into two groups. The first “paired” group of rats was conditioned to associate an image with a sound. The second “unpaired” group watched the same image and listened to the same sound, but not at the same time. This prevented the rats from making an association.


“Infantile amnesia” occurs despite babies showing memory activity

For many of us, memories of our childhood have become a bit hazy, if they haven’t vanished entirely. But nobody really remembers much before the age of 4, because nearly all humans experience what’s termed “infantile amnesia,” in which memories that might have formed before that age seemingly vanish as we move through adolescence. And it’s not just us; the phenomenon appears to occur in a number of our fellow mammals.

The simplest explanation for this would be that the systems that form long-term memories are simply immature and don’t start working effectively until children hit the age of 4. But a recent animal experiment suggests that the situation in mice is more complex: the memories are there, they’re just not normally accessible, although they can be reactivated. Now, a study that put human infants in an MRI tube suggests that memory activity starts by the age of 1, hinting that the results in mice may apply to us.

Less than total recall

Mice are one of the species that we know experience infantile amnesia. And, thanks to over a century of research on mice, we have some sophisticated genetic tools that allow us to explore what’s actually involved in the apparent absence of the animals’ earliest memories.

A paper that came out last year describes a series of experiments that start by having very young mice learn to associate seeing a light come on with receiving a mild shock. If nothing else is done with those mice, that association will apparently be forgotten later in life due to infantile amnesia.

But in this case, the researchers could do something. Neural activity normally results in the activation of a set of genes. In these mice, the researchers engineered things so that one of the genes activated this way encodes a protein that can modify DNA. When this protein is made, it results in permanent changes to a second gene that was inserted in the animal’s DNA. Once activated through this process, that second gene leads to the production of a light-activated ion channel.


Brains of parrots, unlike songbirds, use human-like vocal control

Thanks to past work, we’ve already identified the brain structure that controls the activity of the key vocal organ, the syrinx, located in the bird’s throat. The new study, done by Zetian Yang and Michael Long of New York University, managed to place fine electrodes into this area of the brain in both species and track the activity of neurons there while the birds were awake and going about their normal activities. This allowed them to associate neural activity with any vocalizations made by the birds. For the budgerigars, they had an average of over 1,000 calls from each of the four birds carrying the implanted electrodes.

For the zebra finch, neural activity during song production showed a pattern that was based on timing; the same neurons tended to be most active at the same point in the song. You can think of this as working a bit like a player piano, with timing as the central organizing principle that determines when different notes should be played. “Different configurations [of neurons] are active at different moments, representing an evolving population ‘barcode,’” as Yang and Long describe this pattern.

That is not at all what was seen with the budgerigars. Here, instead, they saw patterns where the same populations of neurons tended to be active when the bird was producing a similar sound. They broke the warbles down into parts that they characterized on a scale ranging from harmonic to noisy. They found that some groups of neurons tended to be more active whenever the warble was harmonic, while different groups tended to spike when it got noisy. Those observations led them to identify a third population, which was active whenever the budgerigars produced a low-frequency sound.

In addition, Yang and Long analyzed the pitch of the vocalizations. Only about half of the neurons in the relevant region of the brain were linked to pitch. However, within that half, small groups of neurons fired during the production of a relatively narrow range of pitches. The researchers could use the activity of as few as five individual neurons to accurately predict the pitch of the vocalizations at the time.
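Predicting pitch from a handful of neurons is, at its core, a small decoding problem: map a few firing rates to a frequency. The sketch below fits a linear decoder to synthetic data; the paper’s actual decoding method isn’t described here, so treat the linear model and every number as an illustrative assumption.

```python
# Fit a linear pitch decoder to synthetic firing rates from five "neurons."
import numpy as np

rng = np.random.default_rng(1)
rates = rng.poisson(20, size=(500, 5)).astype(float)  # 500 calls x 5 neurons
true_w = np.array([3.0, -1.5, 2.2, 0.5, -2.0])        # invented tuning weights
pitch = 150 + rates @ true_w + rng.normal(0, 5, 500)  # synthetic pitch in Hz

# Ordinary least squares: pitch ~ rates @ w + intercept
X = np.column_stack([rates, np.ones(len(rates))])
w, *_ = np.linalg.lstsq(X, pitch, rcond=None)
print(np.corrcoef(X @ w, pitch)[0, 1])  # close to 1: five neurons suffice here
```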


Sleeping pills stop the brain’s system for cleaning out waste


Cleanup on aisle cerebellum

A specialized system sends pulses of pressure through the fluids in our brain.

Our bodies rely on the lymphatic system to drain excess fluids and remove waste from tissues, feeding those back into the bloodstream. It’s a complex yet efficient cleaning mechanism that works in every organ except the brain. “When cells are active, they produce waste metabolites, and this also happens in the brain. Since there are no lymphatic vessels in the brain, the question was what it was that cleaned the brain,” Natalie Hauglund, a neuroscientist at Oxford University who led a recent study on the brain-clearing mechanism, told Ars.

Earlier studies done mostly on mice discovered that the brain had a system that flushed its tissues with cerebrospinal fluid, which carried away waste products in a process called glymphatic clearance. “Scientists noticed that this only happened during sleep, but it was unknown what it was about sleep that initiated this cleaning process,” Hauglund explains.

Her study found the glymphatic clearance was mediated by a hormone called norepinephrine and happened almost exclusively during the NREM sleep phase. But it only worked when sleep was natural. Anesthesia and sleeping pills shut this process down nearly completely.

Taking it slowly

The glymphatic system in the brain was discovered back in 2013 by Dr. Maiken Nedergaard, a Danish neuroscientist and a coauthor of Hauglund’s paper. Since then, there have been numerous studies aimed at figuring out how it worked, but most of them had one problem: they were done on anesthetized mice.

“What makes anesthesia useful is that you can have a very controlled setting,” Hauglund says.

Most brain imaging techniques require a subject, an animal or a human, to be still. In mouse experiments, that meant immobilizing their heads so the research team could get clear scans. “But anesthesia also shuts down some of the mechanisms in the brain,” Hauglund argues.

So, her team designed a study to see how the brain-clearing mechanism works in mice that could move freely in their cages and sleep naturally whenever they felt like it. “It turned out that with the glymphatic system, we didn’t really see the full picture when we used anesthesia,” Hauglund says.

Looking into the brain of a mouse that runs around and wiggles during sleep, though, wasn’t easy. The team pulled it off using a technique called flow fiber photometry, which works by imaging fluids tagged with fluorescent markers through a probe implanted in the brain. So, the mice got optical fibers implanted in their brains. Once that was done, the team put fluorescent tags in the mice’s blood and cerebrospinal fluid, and on the norepinephrine hormone. “Fluorescent molecules in the cerebrospinal fluid had one wavelength, blood had another wavelength, and norepinephrine had yet another wavelength,” Hauglund says.

This way, her team could get a fairly precise idea about the brain fluid dynamics when mice were awake and asleep. And it turned out that the glymphatic system basically turned brain tissues into a slowly moving pump.

Pumping up

“Norepinephrine is released from a small area of the brain in the brain stem,” Hauglund says. “It is mainly known as a response to stressful situations. For example, in fight or flight scenarios, you see norepinephrine levels increasing.” Its main effect is causing blood vessels to contract. Still, in more recent research, people found out that during sleep, norepinephrine is released in slow waves that roll over the brain roughly once a minute. This oscillatory norepinephrine release proved crucial to the operation of the glymphatic system.

“When we used the flow fiber photometry method to look into the brains of mice, we saw these slow waves of norepinephrine, but we also saw how it works in synchrony with fluctuation in the blood volume,” Hauglund says.

Every time the norepinephrine level went up, it caused the contraction of the blood vessels in the brain, and the blood volume went down. At the same time, the contraction increased the volume of the perivascular spaces around the blood vessels, which were immediately filled with the cerebrospinal fluid.

When the norepinephrine level went down, the process worked in reverse: the blood vessels dilated, letting the blood in and pushing the cerebrospinal fluid out. “What we found was that norepinephrine worked a little bit like a conductor of an orchestra and makes the blood and cerebrospinal fluid move in synchrony in these slow waves,” Hauglund says.
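A toy simulation makes this conductor metaphor concrete: a slow, once-a-minute norepinephrine wave drives blood and cerebrospinal fluid volumes in antiphase, producing the pumping action. The waveform and coefficients below are invented for illustration, not taken from the study.

```python
# Toy model of the glymphatic "pump": norepinephrine (NE) waves contract
# vessels, shrinking blood volume while cerebrospinal fluid (CSF) fills
# the freed perivascular space. All parameters are illustrative.
import numpy as np

t = np.linspace(0, 600, 6000)                # ten minutes, in seconds
ne = 0.5 + 0.5 * np.sin(2 * np.pi * t / 60)  # one NE wave per minute
blood = 1.0 - 0.3 * ne                       # high NE -> constricted vessels
csf = 1.0 + 0.3 * ne                         # CSF moves in to fill the space

# Blood and CSF volumes oscillate in antiphase; that slow alternation is
# what pushes cerebrospinal fluid through the brain tissue.
print(np.corrcoef(blood, csf)[0, 1])  # ~ -1.0
```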

And because the study was designed to monitor this process in freely moving, undisturbed mice, the team learned exactly when all this was going on. When mice were awake, the norepinephrine levels were much higher but relatively steady. The team observed the opposite during the REM sleep phase, where the norepinephrine levels were consistently low. The oscillatory behavior was present exclusively during the NREM sleep phase.

So, the team wanted to check how glymphatic clearance would work when they gave the mice zolpidem, a sleeping drug that had been proven to increase NREM sleep time. In theory, zolpidem should have boosted brain-clearing. But it turned the process off instead.

Non-sleeping pills

“When we looked at the mice after giving them zolpidem, we saw they all fell asleep very quickly. That was expected—we take zolpidem because it makes it easier for us to sleep,” Hauglund says. “But then we saw those slow fluctuations in norepinephrine, blood volume, and cerebrospinal fluid almost completely stopped.”

No fluctuations meant the glymphatic system didn’t remove any waste. This was a serious issue, because one of the cellular waste products it is supposed to remove is amyloid beta, found in the brains of patients suffering from Alzheimer’s disease.

Hauglund speculates that zolpidem may induce a state very similar to sleep while shutting down some of the important processes that happen during sleep. And while heavy zolpidem use has been associated with an increased risk of Alzheimer’s disease, it is not clear whether that increased risk exists because the drug inhibits oscillatory norepinephrine release in the brain. To better understand this, Hauglund wants to get a closer look at how the glymphatic system works in humans.

“We know we have the same wave-like fluid dynamics in the brain, so this could also drive the brain clearance in humans,” Hauglund told Ars. “Still, it’s very hard to look at norepinephrine in the human brain because we need an invasive technique to get to the tissue.”

But she said norepinephrine levels in people can be estimated based on indirect clues. One of them is pupil dilation and contraction, which work in synchrony with norepinephrine levels. Another clue may lie in microarousals—very brief, imperceptible awakenings that, Hauglund thinks, can be correlated with the brain-clearing mechanism. “I am currently interested in this phenomenon […]. Right now we have no idea why microarousals are there or what function they have,” Hauglund says.

But the last step on her roadmap is making better sleeping pills. “We need sleeping drugs that don’t have this inhibitory effect on the norepinephrine waves. If we can have a sleeping pill that helps people sleep without disrupting their sleep at the same time, it will be very important,” Hauglund concludes.

Cell, 2025. DOI: 10.1016/j.cell.2024.11.027


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


How should we treat beings that might be sentient?


Being aware of the maybe self-aware

A book argues that we’ve not thought enough about things that might think.

What rights should a creature with ambiguous self-awareness, like an octopus, be granted? Credit: A. Martin UW Photography

If you aren’t yet worried about the multitude of ways you inadvertently inflict suffering onto other living creatures, you will be after reading The Edge of Sentience by Jonathan Birch. And for good reason. Birch, a professor of philosophy at the London School of Economics and Political Science, was one of a team of experts chosen by the UK government to help establish the Animal Welfare (Sentience) Act 2022—a law that protects animals whose sentience status is unclear.

According to Birch, even insects may possess sentience, which he defines as the capacity to have valenced experiences, or experiences that feel good or bad. At the very least, Birch explains, insects (as well as all vertebrates and a selection of invertebrates) are sentience candidates: animals that may be conscious and, until proven otherwise, should be regarded as such.

Although it might be a stretch to wrap our mammalian minds around insect sentience, it is not difficult to imagine that fellow vertebrates have the capacity to experience life, nor does it come as a surprise that even some invertebrates, such as octopuses and other cephalopod mollusks (squid, cuttlefish, and nautilus), qualify for sentience candidature. In fact, one species of octopus, Octopus vulgaris, has been protected by the UK’s Animals (Scientific Procedures) Act (ASPA) since 1986, which illustrates how long we have been aware of the possibility that invertebrates might be capable of experiencing valenced states of awareness, such as contentment, fear, pleasure, and pain.

A framework for fence-sitters

Non-human animals, of course, are not the only beings whose ambiguous sentience status poses complicated questions. Birch discusses people with disorders of consciousness, embryos and fetuses, neural organoids (brain tissue grown in a dish), and even “AI technologies that reproduce brain functions and/or mimic human behavior,” all of which share the unenviable position of being perched on the edge of sentience—a place where it is excruciatingly unclear whether or not these individuals are capable of conscious experience.

What’s needed, Birch argues, when faced with such staggering uncertainty about the sentience status of other beings, is a precautionary framework that outlines best practices for decision-making regarding their care. And in The Edge of Sentience, he provides exactly that, in meticulous, orderly detail.

Over more than 300 pages, he outlines three fundamental framework principles and 26 specific case proposals about how to handle complex situations related to the care and treatment of sentience-edgers. For example, Proposal 2 cautions that “a patient with a prolonged disorder of consciousness should not be assumed incapable of experience” and suggests that medical decisions made on their behalf cautiously presume they are capable of feeling pain. Proposal 16 warns about conflating brain size, intelligence, and sentience, and recommends decoupling the three so that we do not incorrectly assume that small-brained animals are incapable of conscious experience.

Surgeries and stem cells

Be forewarned, some topics in The Edge of Sentience are difficult. For example, Chapter 10 covers embryos and fetuses. In the 1980s, Birch shares, it was common practice to not use anesthesia on newborn babies or fetuses when performing surgery. Why? Because whether or not newborns and fetuses experience pain was up for debate. Rather than put newborns and fetuses through the risks associated with anesthesia, it was accepted practice to give them a paralytic (which prevents all movement) and carry on with invasive procedures, up to and including heart surgery.

After parents raised alarms over the devastating outcomes of this practice, such as infant mortality, it was eventually changed. Birch’s takeaway message is clear: When in doubt about the sentience status of a living being, we should probably assume it is capable of experiencing pain and take all necessary precautions to prevent it from suffering. To presume the opposite can be unethical.

This guidance is repeated throughout the book. Neural organoids, discussed in Chapter 11, are mini-models of brains developed from stem cells. The potential for scientists to use neural organoids to unravel the mechanisms of debilitating neurological conditions—and to avoid invasive animal research while doing so—is immense. It is also ethical, Birch posits, since studying organoids lessens the suffering of research animals. However, we don’t yet know whether or not neural tissue grown in a dish has the potential to develop sentience, so he argues that we need to develop a precautionary approach that balances the benefits of reduced animal research against the risk that neural organoids are capable of being sentient.

A four-pronged test

Along this same line, Birch says, all welfare decisions regarding sentience-edgers require an assessment of proportionality. We must balance the nature of a given proposed risk to a sentience candidate with the potential harms that could result if nothing is done to minimize the risk. To do this, he suggests testing four criteria: permissibility-in-principle, adequacy, reasonable necessity, and consistency. Birch refers to this assessment process as PARC and dives deep into its implementation in Chapter 8.

When applying the PARC criteria, one begins by testing permissibility-in-principle: whether or not the proposed response to a risk is ethically permissible. To illustrate this, Birch poses a hypothetical question: would it be ethically permissible to mandate vaccination in response to a pandemic? If a panel of citizens were in charge of answering this question, they might say “no,” because forcing people to be vaccinated feels unethical. Yet, when faced with the same question, a panel of experts might say “yes,” because allowing people to die who could be saved by vaccination also feels unethical. Gauging permissibility-in-principle, therefore, entails careful consideration of the likely possible outcomes of a proposed response. If an outcome is deemed ethical, it is permissible.

Next, the adequacy of a proposed response must be tested. A proportionate response to a risk must do enough to lessen the risk. This means the risk must be reduced to “an acceptable level” or, if that’s not possible, a response should “deliver the best level of risk reduction that can be achieved” via an ethically permissible option.

The third test is reasonable necessity. A proposed response to a risk must not overshoot—it should not go beyond what is reasonably necessary to reduce risk, in terms of either cost or imposed harm. And last, consistency should be considered. The example Birch presents is animal welfare policy. He suggests we should always “aim for taxonomic consistency: our treatment of one group of animals (e.g., vertebrates) should be consistent with our treatment of another (e.g., invertebrates).”

The Edge of Sentience, as a whole, is a dense text overflowing with philosophical rhetoric. Yet this rhetoric plays a crucial role in the storytelling: it is the backbone for Birch’s clear and organized conclusions, and it serves as a jumping-off point for the logical progression of his arguments. Much like “I think, therefore I am” gave René Descartes a foundation upon which to build his idea of substance dualism, Birch uses the fundamental position that humans should not inflict gratuitous suffering onto fellow creatures as a base upon which to build his precautionary framework.

For curious readers who would prefer not to wade too deeply into meaty philosophical concepts, Birch generously provides a shortcut to his conclusions: a cheat sheet of his framework principles and special case proposals is presented at the front of the book.

Birch’s ultimate message in The Edge of Sentience is that a massive shift in how we view beings with a questionable sentience status should be made. And we should ideally make this change now, rather than waiting for scientific research to infallibly determine who and what is sentient. Birch argues that one way that citizens and policy-makers can begin this process is by adopting the following decision-making framework: always avoid inflicting gratuitous suffering on sentience candidates; take precautions when making decisions regarding a sentience candidate; and make proportional decisions about the care of sentience candidates that are “informed, democratic and inclusive.”

You might be tempted to shake your head at Birch’s confidence in humanity. No matter how deeply you agree with his stance of doing no harm, it’s hard to have confidence in humanity given our track record of not making big changes for the benefit of living creatures, even when said creatures include our own species (cue global warming here). It seems excruciatingly unlikely that the entire world will adopt Birch’s rational, thoughtful, comprehensive plan for reducing the suffering of all potentially sentient creatures. Yet Birch, a philosopher at heart, ignores human history and maintains a tone of articulate, patient optimism. He clearly believes in us—he knows we can do better—and he offers to hold our hands and walk us through the steps to do so.

Lindsey Laughlin is a science writer and freelance journalist who lives in Portland, Oregon, with her husband and four children. She earned her BS from UC Davis with majors in physics, neuroscience, and philosophy.


Tweaking non-neural brain cells can cause memories to fade


Neurons and a second cell type called an astrocyte collaborate to hold memories.

Astrocytes (labelled in black) sit within a field of neurons. Credit: Ed Reschke

“If we go back to the early 1900s, this is when the idea was first proposed that memories are physically stored in some location within the brain,” says Michael R. Williamson, a researcher at the Baylor College of Medicine in Houston. For a long time, neuroscientists thought that the storage of memory in the brain was the job of engrams, ensembles of neurons that activate during a learning event. But it turned out this wasn’t the whole picture.

Williamson’s research investigated the role astrocytes, non-neuronal brain cells, play in the read-and-write operations that go on in our heads. “Over the last 20 years, the role of astrocytes has been understood better. We’ve learned that they can activate neurons. The addition we have made to that is showing that there are subsets of astrocytes that are active and involved in storing specific memories,” Williamson says, describing a new study his lab has published.

One consequence of this finding: Astrocytes could be artificially manipulated to suppress or enhance a specific memory, leaving all other memories intact.

Marking star cells

Astrocytes, otherwise known as star cells due to their shape, play various roles in the brain, and many are focused on the health and activity of their neighboring neurons. Williamson’s team started by developing techniques that enabled them to mark chosen ensembles of astrocytes to see when they activate genes (including one named c-Fos) that help neurons reconfigure their connections and are deemed crucial for memory formation. This was based on the idea that the same pathway would be active in neurons and astrocytes.

“In simple terms, we use genetic tools that allow us to inject mice with a drug that artificially makes astrocytes express some other gene or protein of interest when they become active,” says Wookbong Kwon, a biotechnologist at Baylor College and co-author of the study.

Those proteins of interest were mainly fluorescent proteins that make cells fluoresce bright red. This way, the team could spot the astrocytes in mouse brains that became active during learning scenarios. Once the tagging system was in place, Williamson and his colleagues gave their mice a little scare.

“It’s called fear conditioning, and it’s a really simple idea. You take a mouse, put it into a new box, one it’s never seen before. While the mouse explores this new box, we just apply a series of electrical shocks through the floor,” Williamson explains. A mouse treated this way remembers this as an unpleasant experience and associates it with contextual cues like the box’s appearance, the smells and sounds present, and so on.

The tagging system lit up all astrocytes that expressed the c-Fos gene in response to fear conditioning. Williamson’s team inferred that this is where the memory is stored in the mouse’s brain. Knowing that, they could move on to the next question: whether and how astrocytes and engram neurons interacted during this process.

Modulating engram neurons

“Astrocytes are really bushy,” Williamson says. They have a complex morphology with lots and lots of micro or nanoscale processes that infiltrate the area surrounding them. A single astrocyte can contact roughly 100,000 synapses, and not all of them will be involved in learning events. So the team looked for correlations between astrocytes activated during memory formation and the neurons that were tagged at the same time.

“When we did that, we saw that engram neurons tended to be contacting the astrocytes that are active during the formation of the same memory,” Williamson says. To see how astrocytes’ activity affects neurons, the team artificially stimulated the astrocytes by microinjecting them with a virus engineered to induce the expression of the c-Fos gene. “It directly increased the activity of engram neurons but did not increase the activity of non-engram neurons in contact with the same astrocyte,” Williamson explains.

This way his team established that at least some astrocytes could preferentially communicate with engram neurons. The researchers also noticed that astrocytes involved in memorizing the fear conditioning event had elevated levels of a protein called NFIA, which is known to regulate memory circuits in the hippocampus.

But probably the most striking discovery came when the researchers tested whether the astrocytes involved in memorizing an event also played a role in recalling it later.

Selectively forgetting

The first test to see if astrocytes were involved in recall was to artificially activate them when the mice were in a box that they were not conditioned to fear. It turned out that artificially activating the astrocytes that had been active during the formation of a fear memory in one box caused the mice to freeze even when they were in a different one.

So, the next question was: If you killed or otherwise disabled an astrocyte ensemble that was active during the formation of a specific memory, would that delete the memory from the brain? To get that done, the team used their genetic tools to selectively delete the NFIA protein in astrocytes that were active when the mice received their electric shocks. “We found that mice froze a lot less when we put them in the boxes they were conditioned to fear. They could not remember. But other memories were intact,” Kwon claims.

The memory was not completely deleted, though. The mice still froze in the boxes they were supposed to freeze in, but they did it for a much shorter time on average. “It looked like their memory was maybe a bit foggy. They were not sure if they were in the right place,” Williamson says.

After figuring out how to suppress a memory, the team also figured out where the “undo” button was and brought the memory back.

“When we deleted the NFIA protein in astrocytes, the memory was impaired, but the engram neurons were intact. So, the memory was still somewhere there. The mice just couldn’t access it,” Williamson claims. The team brought the memory back by artificially stimulating the engram neurons using the same technique they employed for activating chosen astrocytes. “That caused the neurons involved in this memory trace to be activated for a few hours. This artificial activity allowed the mice to remember it again,” Williamson says.

The team’s vision is that in the distant future, this technique could be used in treatments targeting neurons that are overactive in disorders such as PTSD. “We now have a new cellular target that we can evaluate and potentially develop treatments that target the astrocyte component associated with memory,” Williamson claims. But there’s a lot more to learn before anything like that becomes possible. “We don’t yet know what signal is released by an astrocyte that acts on the neuron. Another thing is our study was focused on one brain region, which was the hippocampus, but we know that engrams exist throughout the brain in lots of different regions. The next step is to see if astrocytes play the same role in other brain regions that are also critical for memory,” Williamson says.

Nature, 2024.  DOI: 10.1038/s41586-024-08170-w


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


Bats use echolocation to make mental maps for navigation

Bat maps

To evaluate the route each bat took back to the roost, the team used their simulations to measure the echoic entropy it experienced along the way. The field where the bats were released was an area of low echoic entropy, so during those first few minutes of flying around, the bats were likely just looking for more distinct, higher-entropy landmarks to figure out where they were. Once oriented, they started flying toward the roost, but not in a straight line. They meandered a bit, and the groups with more severe sensory deprivation tended to meander more.
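The article doesn’t spell out how echoic entropy is computed, but the underlying idea can be read as Shannon entropy over the distribution of echo strengths a simulated call would return: an open field sends back nearly identical echoes (low entropy, little to orient by), while cluttered terrain sends back a varied, distinctive mix (high entropy). Here is a minimal sketch of that idea in Python; the function name, input format, binning scheme, and toy numbers are assumptions for illustration, not the team’s actual pipeline.

```python
import numpy as np

def echoic_entropy(echo_strengths, n_bins=16, intensity_range=(0.0, 10.0)):
    """Shannon entropy (in bits) of simulated echo strengths around one spot.

    `echo_strengths` is a 1D array of echo intensities returned by a simulated
    call swept across directions (a hypothetical input format). A fixed
    intensity range makes acoustically bland scenes collapse into a few
    histogram bins (low entropy), while varied scenes spread across many bins.
    """
    counts, _ = np.histogram(echo_strengths, bins=n_bins, range=intensity_range)
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty bins; their 0 * log(0) terms contribute nothing
    return float(-np.sum(p * np.log2(p)))

# Toy comparison (illustrative numbers only): an open field returns nearly
# identical weak echoes everywhere; an orchard returns widely varying ones.
rng = np.random.default_rng(0)
field = rng.normal(1.0, 0.01, size=360)
orchard = rng.lognormal(0.0, 1.0, size=360)
print(echoic_entropy(field), echoic_entropy(orchard))  # low vs. high
```

On this reading, the bats’ release site would score near zero, and a landmark like an orchard edge would stand out as a spike of entropy against the flat acoustic background.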

The meandering, the researchers suspect, came from the difficulty of holding a steady course by echolocation alone. When the bats detected a distinctive landmark, like a specific orchard, they corrected their course. Repeating this process eventually brought them to the roost.

But could this be landmark-based navigation? Or perhaps simple beaconing, where an animal locks onto something like a distant light and moves toward it?

The researchers argue in favor of cognitive acoustic maps. “I think if echolocation wasn’t such a limited sensory modality, we couldn’t reach a conclusion about the bats using cognitive acoustic maps,” Goldshtein says. The distance between landmarks the bats used to correct their flight path was significantly longer than echolocation’s sensing range. Yet they knew which direction the roost was relative to one landmark, even when the next landmark on the way was acoustically invisible. You can’t do that without having the area mapped.

“It would be really interesting to understand how other bats do that, to compare between species,” Goldshtein says. There are bats that fly more than a thousand meters above the ground, where they simply can’t sense any landmarks using echolocation. Other species hunt over the open sea, which, according to this team’s simulations, would be just one huge low-entropy area. “We are just starting. That’s why I do not study only navigation but also housing, foraging, and other aspects of their behavior. I think we still don’t know enough about bats in general,” Goldshtein claims.

Science, 2024. DOI: 10.1126/science.adn6269

bizarre-fish-has-sensory-“legs”-it-uses-for-walking-and-tasting

Bizarre fish has sensory “legs” it uses for walking and tasting

Finding out what controls the formation of sensory legs meant growing sea robins from eggs. The research team observed that the legs develop from the three pectoral-fin rays located near the fish’s stomach area, then separate from the fin as development continues. Among the most active genes in the developing legs is one encoding a transcription factor (a protein that binds to DNA and turns genes on and off) known as tbx3a. When the researchers used CRISPR-Cas9 to edit tbx3a out, the genetically engineered sea robins developed fewer legs, deformed legs, or both.

“Disruption of tbx3a results in upregulation of pectoral fin markers prior to leg separation, indicating that leg rays become more similar to fins in the absence of tbx3a,” the researchers said in a second study, also published in Current Biology.

To see whether sensory legs are a genetically dominant trait, the research team also created sea robin hybrids, crossing species with and without sensory legs. The resulting offspring had legs with sensory capabilities, which is the pattern you would expect from a dominant trait: inheriting the relevant gene variants from just one parent is enough for the trait to appear.

Exactly why sea robins evolved this way is still unknown, but the research team has a hypothesis. They think the legs of sea robin ancestors originally served locomotion but gradually gained sensory capabilities, letting the animals scan the visible surface of the seafloor for food. Fish that needed to search deeper for food then developed sensory legs that allowed them to taste and dig for hidden prey.

“Future work will leverage the remarkable biodiversity of sea robins to understand the genetic basis of novel trait formation and diversification in vertebrates,” the team also said in the first study. “Our work represents a basis for understanding how novel traits evolve.”

Current Biology, 2024. DOI: 10.1016/j.cub.2024.08.014, 10.1016/j.cub.2024.08.042

karaoke-reveals-why-we-blush

Karaoke reveals why we blush

Singing for science —

Volunteers watched their own performances as an MRI tracked brain activity.

A hand holding a microphone against a blurry backdrop, taken from an angle that implies the microphone is directly in front of your face.

Singing off-key in front of others is one way to get embarrassed. Regardless of how you get there, why does embarrassment almost inevitably come with burning cheeks that turn an obvious shade of red (which is possibly even more embarrassing)?

Blushing starts not in the face but in the brain, though exactly where has been debated. Earlier thinking often tied the blush response to higher socio-cognitive processes, such as thinking about how one is perceived by others.

After studying subjects who watched videos of themselves singing karaoke, however, researchers led by Milica Nicolic of the University of Amsterdam have found that blushing is really the result of specific emotions being aroused.

Nicolic’s findings suggest that blushing “is a consequence of a high level of ambivalent emotional arousal that occurs when a person feels threatened and wants to flee but, at the same time, feels the urge not to give up,” as she and her colleagues put it in a study recently published in Proceedings of the Royal Society B.

Taking the stage

The researchers sought out test subjects who were most likely to blush when watching themselves sing bad karaoke: adolescent girls. Adolescents tend to be much more self-aware and more sensitive to being judged by others than adults are.

The subjects couldn’t pick just any song, though. Nicolic and her team gave them a choice of four songs that music experts had deemed difficult to sing: “Hello” by Adele, “Let It Go” from Frozen, “All I Want for Christmas Is You” by Mariah Carey, and “All the Things She Said” by t.A.T.u. Videos of the subjects were recorded as they sang.

On their second visit to the lab, subjects were put in an MRI scanner and shown videos of themselves and others singing karaoke. They watched 15 clips of themselves singing and, as a control, 15 clips of another participant judged to have similar singing ability, which let the researchers account for secondhand embarrassment.

The other control was videos of professional singers disguised as participants. Because the professionals sang better overall, they were unlikely to trigger secondhand embarrassment.

Enough to make you blush

The researchers checked for an increase in cheek temperature rather than the blood-flow measurements used in past studies, which are more prone to error. Temperature was recorded with a fast-response temperature transducer as the subjects watched the karaoke videos.

It was only when the subjects watched themselves sing that cheek temperature went up. There was virtually no increase or decrease when watching others—meaning no secondhand embarrassment—and a slight decrease when watching a professional singer.
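In analysis terms, the comparison is simple: for each condition, average the change in cheek temperature between a pre-clip baseline and the viewing window across a subject’s clips, then compare the conditions. Here is a minimal sketch of that comparison; the function name and the per-clip input format are hypothetical, not the study’s actual pipeline.

```python
import numpy as np

def mean_temp_change(baseline_temps, viewing_temps):
    """Average cheek-temperature change (viewing minus pre-clip baseline),
    in degrees C, across one subject's clips in a single condition."""
    baseline = np.asarray(baseline_temps, dtype=float)
    viewing = np.asarray(viewing_temps, dtype=float)
    return float(np.mean(viewing - baseline))

# Computed separately for the three conditions (self, other participant,
# professional singer), the reported pattern corresponds to a clearly
# positive value for "self" and near-zero or slightly negative values
# for the two controls.
```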

The MRI scans revealed which regions of the brain were activated as subjects watched videos of themselves. These include the anterior insular cortex, or anterior insula, which responds to a range of emotions, including fear, anxiety, and, of course, embarrassment. There was also the mid-cingulate cortex, which emotionally and cognitively manages pain—including embarrassment—by trying to anticipate that pain and reacting with aversion and avoidance. The dorsolateral prefrontal cortex, which helps process fear and anxiety, also lit up.

There was also more activity detected in the cerebellum, which is responsible for much of the emotional processing in the brain, when subjects watched themselves sing. Those who blushed more while watching their own video clips showed the most cerebellum activity. This could mean they were feeling stronger emotions.

What surprised the researchers was that there was no additional activation in areas known to be involved in representing others’ mental states, meaning that thinking about what others might think of you may not be necessary for blushing to happen.

So blushing is really more about the surge of emotions someone feels when being faced with things that pertain to the self and not so much about worrying what other people think. That can definitely happen if you’re watching a video of your own voice cracking at the high notes in an Adele song.

Proceedings of the Royal Society B, 2024. DOI: 10.1098/rspb.2024.0958
