For many of us, memories of our childhood have become a bit hazy, if not vanished entirely. But nobody really remembers much before the age of 4, because nearly all humans experience what’s termed “infantile amnesia,” in which memories that might have formed before that age seemingly vanish as we move through adolescence. And it’s not just us; the phenomenon appears to occur in a number of our fellow mammals.
The simplest explanation for this would be that the systems that form long-term memories are simply immature and don’t start working effectively until children hit the age of 4. But a recent animal experiment suggests that the situation in mice is more complex: the memories are there, they’re just not normally accessible, although they can be re-activated. Now, a study that put human infants in an MRI tube indicates that memory activity starts by the age of 1, suggesting that the results in mice may apply to us.
Less than total recall
Mice are one of the species that we know experience infantile amnesia. And, thanks to over a century of research on mice, we have some sophisticated genetic tools that allow us to explore what’s actually involved in the apparent absence of the animals’ earliest memories.
A paper that came out last year describes a series of experiments that start by having very young mice learn to associate seeing a light come on with receiving a mild shock. If nothing else is done with those mice, that association will apparently be forgotten later in life due to infantile amnesia.
But in this case, the researchers could do something. Neural activity normally results in the activation of a set of genes. In these mice, the researchers engineered it so one of the genes that gets activated encodes a protein that can modify DNA. When this protein is made, it results in permanent changes to a second gene that was inserted in the animal’s DNA. Once activated through this process, the gene leads to the production of a light-activated ion channel.
Due to past work, we’ve already identified the brain structure that controls the activity of the key vocal organ, the syrinx, located at the base of the bird’s trachea. The new study, done by Zetian Yang and Michael Long of New York University, managed to place fine electrodes into this area of the brain in both species and track the activity of neurons there while the birds were awake and going about normal activities. This allowed them to associate neural activity with any vocalizations made by the birds. For the budgerigars, they had an average of over 1,000 calls from each of the four birds carrying the implanted electrodes.
For the zebra finch, neural activity during song production showed a pattern that was based on timing; the same neurons tended to be most active at the same point in the song. You can think of this as working a bit like a player piano, with timing as the central organizing principle that determines when different notes should be played. “Different configurations [of neurons] are active at different moments, representing an evolving population ‘barcode,’” as Yang and Long describe this pattern.
That is not at all what was seen with the budgerigars. Here, instead, they saw patterns where the same populations of neurons tended to be active when the bird was producing a similar sound. They broke the warbles down into parts that they characterized on a scale that ranged from harmonic to noisy. They found that some groups of neurons tended to be more active whenever the warble was harmonic, while different groups tended to spike when it got noisy. Those observations led them to identify a third population, which was active whenever the budgerigars produced a low-frequency sound.
In addition, Yang and Long analyzed the pitch of the vocalizations. Only about half of the neurons in the relevant region of the brain were linked to pitch. Within that half, however, small groups of neurons fired during the production of a relatively narrow range of pitches. They could use the activity of as few as five individual neurons to accurately predict the pitch of the vocalizations at the time.
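To get a feel for what that sort of decoding involves, here is a minimal sketch (in Python, with entirely made-up firing rates and tuning rather than the study’s data or methods) of fitting a simple linear model that predicts pitch from the activity of five neurons:

```python
# Toy sketch (not the study's pipeline): decode pitch from a few neurons' firing rates.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data: firing rates (spikes/s) of 5 neurons across 200 call segments,
# plus the measured pitch (Hz) of each segment.
firing_rates = rng.poisson(lam=20, size=(200, 5)).astype(float)
made_up_tuning = np.array([40.0, -15.0, 25.0, 5.0, -30.0])
pitch_hz = 2000 + firing_rates @ made_up_tuning + rng.normal(0, 50, 200)

# Fit a linear decoder and check how well five neurons predict pitch.
decoder = LinearRegression().fit(firing_rates, pitch_hz)
print(f"R^2 of pitch prediction from 5 neurons: {decoder.score(firing_rates, pitch_hz):.2f}")
```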
A specialized system sends pulses of pressure through the fluids in our brain.
Our bodies rely on their lymphatic system to drain excessive fluids and remove waste from tissues, feeding those back into the blood stream. It’s a complex yet efficient cleaning mechanism that works in every organ except the brain. “When cells are active, they produce waste metabolites, and this also happens in the brain. Since there are no lymphatic vessels in the brain, the question was what was it that cleaned the brain,” Natalie Hauglund, a neuroscientist at Oxford University who led a recent study on the brain-clearing mechanism, told Ars.
Earlier studies done mostly on mice discovered that the brain had a system that flushed its tissues with cerebrospinal fluid, which carried away waste products in a process called glymphatic clearance. “Scientists noticed that this only happened during sleep, but it was unknown what it was about sleep that initiated this cleaning process,” Hauglund explains.
Her study found the glymphatic clearance was mediated by a hormone called norepinephrine and happened almost exclusively during the NREM sleep phase. But it only worked when sleep was natural. Anesthesia and sleeping pills shut this process down nearly completely.
Taking it slowly
The glymphatic system in the brain was discovered back in 2013 by Dr. Maiken Nedergaard, a Danish neuroscientist and a coauthor of Hauglund’s paper. Since then, there have been numerous studies aimed at figuring out how it worked, but most of them had one problem: they were done on anesthetized mice.
“What makes anesthesia useful is that you can have a very controlled setting,” Hauglund says.
Most brain imaging techniques require a subject, an animal or a human, to be still. In mouse experiments, that meant immobilizing their heads so the research team could get clear scans. “But anesthesia also shuts down some of the mechanisms in the brain,” Hauglund argues.
So, her team designed a study to see how the brain-clearing mechanism works in mice that could move freely in their cages and sleep naturally whenever they felt like it. “It turned out that with the glymphatic system, we didn’t really see the full picture when we used anesthesia,” Hauglund says.
Looking into the brain of a mouse that runs around and wiggles during sleep, though, wasn’t easy. The team pulled it off using a technique called flow fiber photometry, which works by imaging fluids tagged with fluorescent markers through a probe implanted in the brain. So, the mice got optical fibers implanted in their brains. Once that was done, the team put fluorescent tags in the mice’s blood, in their cerebrospinal fluid, and on the norepinephrine hormone. “Fluorescent molecules in the cerebrospinal fluid had one wavelength, blood had another wavelength, and norepinephrine had yet another wavelength,” Hauglund says.
This way, her team could get a fairly precise idea about the brain fluid dynamics when mice were awake and asleep. And it turned out that the glymphatic system basically turned brain tissues into a slowly moving pump.
Pumping up
“Norepinephrine is released from a small area of the brain in the brain stem,” Hauglund says. “It is mainly known as a response to stressful situations. For example, in fight or flight scenarios, you see norepinephrine levels increasing.” Its main effect is causing blood vessels to contract. Still, in more recent research, people found out that during sleep, norepinephrine is released in slow waves that roll over the brain roughly once a minute. This oscillatory norepinephrine release proved crucial to the operation of the glymphatic system.
“When we used the flow fiber photometry method to look into the brains of mice, we saw these slow waves of norepinephrine, but we also saw how it works in synchrony with fluctuation in the blood volume,” Hauglund says.
Every time the norepinephrine level went up, it caused the contraction of the blood vessels in the brain, and the blood volume went down. At the same time, the contraction increased the volume of the perivascular spaces around the blood vessels, which were immediately filled with the cerebrospinal fluid.
When the norepinephrine level went down, the process worked in reverse: the blood vessels dilated, letting the blood in and pushing the cerebrospinal fluid out. “What we found was that norepinephrine works a little bit like a conductor of an orchestra and makes the blood and cerebrospinal fluid move in synchrony in these slow waves,” Hauglund says.
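As a rough illustration of that push-pull arrangement (purely schematic, and not the model used in the study), here is a toy calculation in which a once-a-minute norepinephrine wave constricts the vessels, blood volume falls, and cerebrospinal fluid flows into the space that opens up:

```python
# Schematic toy model of the norepinephrine-driven "pump" (not the study's model).
import numpy as np

t = np.arange(0, 600, 1.0)                                 # ten minutes of NREM sleep, in seconds
norepinephrine = 0.5 + 0.5 * np.sin(2 * np.pi * t / 60)    # one slow wave per minute

# Assumed, illustrative coupling: more norepinephrine -> constricted vessels -> less blood volume.
blood_volume = 1.0 - 0.3 * norepinephrine
# Cerebrospinal fluid fills the perivascular space vacated by blood, so it moves in antiphase.
csf_volume = 1.0 - blood_volume

# The rate of change of CSF volume is the slow in-and-out flow that carries waste along.
csf_flow = np.gradient(csf_volume, t)
print(f"Peak CSF inflow (arbitrary units per second): {csf_flow.max():.4f}")
```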
And because the study was designed to monitor this process in freely moving, undisturbed mice, the team learned exactly when all this was going on. When mice were awake, the norepinephrine levels were much higher but relatively steady. The team observed the opposite during the REM sleep phase, where the norepinephrine levels were consistently low. The oscillatory behavior was present exclusively during the NREM sleep phase.
So, the team wanted to check how the glymphatic clearance would work when they gave the mice zolpidem, a sleeping drug that had been proven to increase NREM sleep time. In theory, zolpidem should have boosted brain-clearing. But it turned it off instead.
Non-sleeping pills
“When we looked at the mice after giving them zolpidem, we saw they all fell asleep very quickly. That was expected—we take zolpidem because it makes it easier for us to sleep,” Hauglund says. “But then we saw those slow fluctuations in norepinephrine, blood volume, and cerebrospinal fluid almost completely stopped.”
No fluctuations meant the glymphatic system didn’t remove any waste. This was a serious issue, because one of the cellular waste products it is supposed to remove is amyloid beta, found in the brains of patients suffering from Alzheimer’s disease.
Hauglund speculates that zolpidem may induce a state that looks very much like sleep while shutting down important processes that happen during sleep. Heavy zolpidem use has been associated with an increased risk of Alzheimer’s disease, but it is not clear whether that increased risk arises because the drug inhibits oscillatory norepinephrine release in the brain. To better understand this, Hauglund wants to get a closer look at how the glymphatic system works in humans.
“We know we have the same wave-like fluid dynamics in the brain, so this could also drive the brain clearance in humans,” Hauglund told Ars. “Still, it’s very hard to look at norepinephrine in the human brain because we need an invasive technique to get to the tissue.”
But she said norepinephrine levels in people can be estimated based on indirect clues. One of them is pupil dilation and contraction, which work in synchrony with norepinephrine levels. Another clue may lie in microarousals—very brief, imperceptible awakenings that, Hauglund thinks, may be correlated with the brain-clearing mechanism. “I am currently interested in this phenomenon […]. Right now we have no idea why microarousals are there or what function they have,” Hauglund says.
But the last step on her roadmap is making better sleeping pills. “We need sleeping drugs that don’t have this inhibitory effect on the norepinephrine waves. If we can have a sleeping pill that helps people sleep without disrupting their sleep at the same time, it will be very important,” Hauglund concludes.
Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.
A book argues that we’ve not thought enough about things that might think.
What rights should a creature with ambiguous self-awareness, like an octopus, be granted? Credit: A. Martin UW Photography
If you aren’t yet worried about the multitude of ways you inadvertently inflict suffering onto other living creatures, you will be after reading The Edge of Sentience by Jonathan Birch. And for good reason. Birch, a Professor of Philosophy at the London School of Economics and Political Science, was one of a team of experts chosen by the UK government to establish the Animal Welfare (Sentience) Act in 2022—a law that protects animals whose sentience status is unclear.
According to Birch, even insects may possess sentience, which he defines as the capacity to have valenced experiences, or experiences that feel good or bad. At the very least, Birch explains, insects (as well as all vertebrates and a selection of invertebrates) are sentience candidates: animals that may be conscious and, until proven otherwise, should be regarded as such.
Although it might be a stretch to wrap our mammalian minds around insect sentience, it is not difficult to imagine that fellow vertebrates have the capacity to experience life, nor does it come as a surprise that even some invertebrates, such as octopuses and other cephalopod mollusks (squid, cuttlefish, and nautilus), qualify for sentience candidature. In fact, one species of octopus, Octopus vulgaris, has been protected by the UK’s Animals (Scientific Procedures) Act (ASPA) since 1986, which illustrates how long we have been aware of the possibility that invertebrates might be capable of experiencing valenced states of awareness, such as contentment, fear, pleasure, and pain.
A framework for fence-sitters
Non-human animals, of course, are not the only beings whose ambiguous sentience status poses complicated questions. Birch discusses people with disorders of consciousness, embryos and fetuses, neural organoids (brain tissue grown in a dish), and even “AI technologies that reproduce brain functions and/or mimic human behavior,” all of which share the unenviable position of being perched on the edge of sentience—a place where it is excruciatingly unclear whether or not these individuals are capable of conscious experience.
What’s needed, Birch argues, when faced with such staggering uncertainty about the sentience status of other beings, is a precautionary framework that outlines best practices for decision-making regarding their care. And in The Edge of Sentience, he provides exactly that, in meticulous, orderly detail.
Over more than 300 pages, he outlines three fundamental framework principles and 26 specific case proposals about how to handle complex situations related to the care and treatment of sentience-edgers. For example, Proposal 2 cautions that “a patient with a prolonged disorder of consciousness should not be assumed incapable of experience” and suggests that medical decisions made on their behalf cautiously presume they are capable of feeling pain. Proposal 16 warns about conflating brain size, intelligence, and sentience, and recommends decoupling the three so that we do not incorrectly assume that small-brained animals are incapable of conscious experience.
Surgeries and stem cells
Be forewarned, some topics in The Edge of Sentience are difficult. For example, Chapter 10 covers embryos and fetuses. In the 1980s, Birch shares, it was common practice to not use anesthesia on newborn babies or fetuses when performing surgery. Why? Because whether or not newborns and fetuses experience pain was up for debate. Rather than put newborns and fetuses through the risks associated with anesthesia, it was accepted practice to give them a paralytic (which prevents all movement) and carry on with invasive procedures, up to and including heart surgery.
After parents raised alarms over the devastating outcomes of this practice, such as infant mortality, it was eventually changed. Birch’s takeaway message is clear: When in doubt about the sentience status of a living being, we should probably assume it is capable of experiencing pain and take all necessary precautions to prevent it from suffering. To presume the opposite can be unethical.
This guidance is repeated throughout the book. Neural organoids, discussed in Chapter 11, are mini-models of brains developed from stem cells. The potential for scientists to use neural organoids to unravel the mechanisms of debilitating neurological conditions—and to avoid invasive animal research while doing so—is immense. It is also ethical, Birch posits, since studying organoids lessens the suffering of research animals. However, we don’t yet know whether or not neural tissue grown in a dish has the potential to develop sentience, so he argues that we need to develop a precautionary approach that balances the benefits of reduced animal research against the risk that neural organoids are capable of being sentient.
A four-pronged test
Along this same line, Birch says, all welfare decisions regarding sentience-edgers require an assessment of proportionality. We must balance the nature of a given proposed risk to a sentience candidate with potential harms that could result if nothing is done to minimize the risk. To do this, he suggests testing four criteria: permissibility-in-principle, adequacy, reasonable necessity, and consistency. Birch refers to this assessment process as PARC and takes a deep dive into its implementation in Chapter 8.
When applying the PARC criteria, one begins by testing permissibility-in-principle: whether or not the proposed response to a risk is ethically permissible. To illustrate this, Birch poses a hypothetical question: would it be ethically permissible to mandate vaccination in response to a pandemic? If a panel of citizens were in charge of answering this question, they might say “no,” because forcing people to be vaccinated feels unethical. Yet, when faced with the same question, a panel of experts might say “yes,” because allowing people to die who could be saved by vaccination also feels unethical. Gauging permissibility-in-principle, therefore, entails careful consideration of the likely possible outcomes of a proposed response. If an outcome is deemed ethical, it is permissible.
Next, the adequacy of a proposed response must be tested. A proportionate response to a risk must do enough to lessen the risk. This means the risk must be reduced to “an acceptable level” or, if that’s not possible, a response should “deliver the best level of risk reduction that can be achieved” via an ethically permissible option.
The third test is reasonable necessity. A proposed response to a risk must not overshoot—it should not go beyond what is reasonably necessary to reduce risk, in terms of either cost or imposed harm. And last, consistency should be considered. The example Birch presents is animal welfare policy. He suggests we should always “aim for taxonomic consistency: our treatment of one group of animals (e.g., vertebrates) should be consistent with our treatment of another (e.g., invertebrates).”
The Edge of Sentience, as a whole, is a dense text overflowing with philosophical rhetoric. Yet this rhetoric plays a crucial role in the storytelling: it is the backbone for Birch’s clear and organized conclusions, and it serves as a jumping-off point for the logical progression of his arguments. Much like “I think, therefore I am” gave René Descartes a foundation upon which to build his idea of substance dualism, Birch uses the fundamental position that humans should not inflict gratuitous suffering onto fellow creatures as a base upon which to build his precautionary framework.
For curious readers who would prefer not to wade too deeply into meaty philosophical concepts, Birch generously provides a shortcut to his conclusions: a cheat sheet of his framework principles and special case proposals is presented at the front of the book.
Birch’s ultimate message in The Edge of Sentience is that we need a massive shift in how we view beings with a questionable sentience status. And we should ideally make this change now, rather than waiting for scientific research to infallibly determine who and what is sentient. Birch argues that one way that citizens and policy-makers can begin this process is by adopting the following decision-making framework: always avoid inflicting gratuitous suffering on sentience candidates; take precautions when making decisions regarding a sentience candidate; and make proportional decisions about the care of sentience candidates that are “informed, democratic and inclusive.”
You might be tempted to shake your head at Birch’s confidence in humanity. No matter how deeply you agree with his stance of doing no harm, it’s hard to have confidence in humanity given our track record of not making big changes for the benefit of living creatures, even when said creatures include our own species (cue global warming here). It seems excruciatingly unlikely that the entire world will adopt Birch’s rational, thoughtful, comprehensive plan for reducing the suffering of all potentially sentient creatures. Yet Birch, a philosopher at heart, ignores human history and maintains a tone of articulate, patient optimism. He clearly believes in us—he knows we can do better—and he offers to hold our hands and walk us through the steps to do so.
Lindsey Laughlin is a science writer and freelance journalist who lives in Portland, Oregon, with her husband and four children. She earned her BS from UC Davis with majors in physics, neuroscience, and philosophy.
Neurons and a second cell type called an astrocyte collaborate to hold memories.
Astrocytes (labelled in black) sit within a field of neurons. Credit: Ed Reschke
“If we go back to the early 1900s, this is when the idea was first proposed that memories are physically stored in some location within the brain,” says Michael R. Williamson, a researcher at the Baylor College of Medicine in Houston. For a long time, neuroscientists thought that the storage of memory in the brain was the job of engrams, ensembles of neurons that activate during a learning event. But it turned out this wasn’t the whole picture.
Williamson’s research investigated the role that astrocytes, non-neuronal brain cells, play in the read-and-write operations that go on in our heads. “Over the last 20 years the role of astrocytes has been understood better. We’ve learned that they can activate neurons. The addition we have made to that is showing that there are subsets of astrocytes that are active and involved in storing specific memories,” Williamson says in describing a new study his lab has published.
One consequence of this finding: Astrocytes could be artificially manipulated to suppress or enhance a specific memory, leaving all other memories intact.
Marking star cells
Astrocytes, otherwise known as star cells due to their shape, play various roles in the brain, and many are focused on the health and activity of their neighboring neurons. Williamson’s team started by developing techniques that enabled them to mark chosen ensembles of astrocytes to see when they activate genes (including one named c-Fos) that help neurons reconfigure their connections and are deemed crucial for memory formation. This was based on the idea that the same pathway would be active in neurons and astrocytes.
“In simple terms, we use genetic tools that allow us to inject mice with a drug that artificially makes astrocytes express some other gene or protein of interest when they become active,” says Wookbong Kwon, a biotechnologist at Baylor College and co-author of the study.
Those proteins of interest were mainly fluorescent proteins that make cells fluoresce bright red. This way, the team could spot the astrocytes in mouse brains that became active during learning scenarios. Once the tagging system was in place, Williamson and his colleagues gave their mice a little scare.
“It’s called fear conditioning, and it’s a really simple idea. You take a mouse, put it into a new box, one it’s never seen before. While the mouse explores this new box, we just apply a series of electrical shocks through the floor,” Williamson explains. A mouse treated this way remembers this as an unpleasant experience and associates it with contextual cues like the box’s appearance, the smells and sounds present, and so on.
The tagging system lit up all astrocytes that expressed the c-Fos gene in response to fear conditioning. Williamson’s team inferred that this is where the memory is stored in the mouse’s brain. Knowing that, they could move on to the next question, which was whether and how astrocytes and engram neurons interacted during this process.
Modulating engram neurons
“Astrocytes are really bushy,” Williamson says. They have a complex morphology with lots and lots of micro- or nanoscale processes that infiltrate the area surrounding them. A single astrocyte can contact roughly 100,000 synapses, and not all of them will be involved in learning events. So the team looked for correlations between astrocytes activated during memory formation and the neurons that were tagged at the same time.
“When we did that, we saw that engram neurons tended to be contacting the astrocytes that are active during the formation of the same memory,” Williamson says. To see how astrocytes’ activity affects neurons, the team artificially stimulated the astrocytes by microinjecting them with a virus engineered to induce the expression of the c-Fos gene. “It directly increased the activity of engram neurons but did not increase the activity of non-engram neurons in contact with the same astrocyte,” Williamson explains.
This way his team established that at least some astrocytes could preferentially communicate with engram neurons. The researchers also noticed that astrocytes involved in memorizing the fear conditioning event had elevated levels of a protein called NFIA, which is known to regulate memory circuits in the hippocampus.
But probably the most striking discovery came when the researchers tested whether the astrocytes involved in memorizing an event also played a role in recalling it later.
Selectively forgetting
The first test to see if astrocytes were involved in recall was to artificially activate them when the mice were in a box that they were not conditioned to fear. It turned out that artificially activating the astrocytes that had been active during the formation of a fear memory in one box caused the mice to freeze even when they were in a different one.
So, the next question was, if you just killed or otherwise disabled an astrocyte ensemble active during a specific memory formation, would it just delete this memory from the brain? To get that done, the team used their genetic tools to selectively delete the NFIA protein in astrocytes that were active when the mice received their electric shocks. “We found that mice froze a lot less when we put them in the boxes they were conditioned to fear. They could not remember. But other memories were intact,” Kwon claims.
The memory was not completely deleted, though. The mice still froze in the boxes they were supposed to freeze in, but they did it for a much shorter time on average. “It looked like their memory was maybe a bit foggy. They were not sure if they were in the right place,” Williamson says.
After figuring out how to suppress a memory, the team also found the “undo” button that could bring it back.
“When we deleted the NFIA protein in astrocytes, the memory was impaired, but the engram neurons were intact. So, the memory was still somewhere there. The mice just couldn’t access it,” Williamson claims. The team brought the memory back by artificially stimulating the engram neurons using the same technique they employed for activating chosen astrocytes. “That caused the neurons involved in this memory trace to be activated for a few hours. This artificial activity allowed the mice to remember it again,” Williamson says.
The team’s vision is that in the distant future, this technique could be used in treatments targeting neurons that are overactive in disorders such as PTSD. “We now have a new cellular target that we can evaluate and potentially develop treatments that target the astrocyte component associated with memory,” Williamson claims. But there’s a lot more to learn before anything like that becomes possible. “We don’t yet know what signal is released by an astrocyte that acts on the neuron. Another thing is our study was focused on one brain region, which was the hippocampus, but we know that engrams exist throughout the brain in lots of different regions. The next step is to see if astrocytes play the same role in other brain regions that are also critical for memory,” Williamson says.
Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.
To evaluate the route each bat took to get back to the roost, the team used their simulations to measure the echoic entropy it experienced along the way. The field where the bats were released was a low echoic entropy area, so during those first few minutes when they were flying around they were likely just looking for some more distinct, higher entropy landmarks to figure out where they were. Once they were oriented, they started flying to the roost, but not in a straight line. They meandered a bit, and the groups with higher sensory deprivation tended to meander more.
The meandering, the researchers suspect, was due to the trouble the bats had maintaining a steady path relying on echolocation alone. When they detected distinctive landmarks, like a specific orchard, they corrected their course. Repeating the process eventually brought them to their roost.
But could this be landmark-based navigation? Or perhaps simple beaconing, where an animal locks onto something like a distant light and moves toward it?
The researchers argue in favor of cognitive acoustic maps. “I think if echolocation wasn’t such a limited sensory modality, we couldn’t reach a conclusion about the bats using cognitive acoustic maps,” Goldshtein says. The distance between landmarks the bats used to correct their flight path was significantly longer than echolocation’s sensing range. Yet they knew which direction the roost was relative to one landmark, even when the next landmark on the way was acoustically invisible. You can’t do that without having the area mapped.
“It would be really interesting to understand how other bats do that, to compare between species,” Goldshtein says. There are bats that fly over a thousand meters above the ground, so they simply can’t sense any landmarks using echolocation. Other species hunt over sea, which, as per this team’s simulations, would be just one huge low-entropy area. “We are just starting. That’s why I do not study only navigation but also housing, foraging, and other aspects of their behavior. I think we still don’t know enough about bats in general,” Goldshtein claims.
Finding out what controls the formation of sensory legs meant growing sea robins from eggs. The research team observed that the legs of sea robins develop from the three pectoral fin rays that are around the stomach area of the fish, then separate from the fin as they continue to develop. Among the most active genes in the developing legs is the transcription factor (a protein that binds to DNA and turns genes on and off) known as tbx3a. When genetically engineered sea robins had tbx3a edited out with CRISPR-Cas9, it resulted in fewer legs, deformed legs, or both.
“Disruption of tbx3a results in upregulation of pectoral fin markers prior to leg separation, indicating that leg rays become more similar to fins in the absence of tbx3a,” the researchers said in a second study, also published in Current Biology.
To see whether genes for sensory legs are a dominant feature, the research team also tried creating sea robin hybrids, crossing species with and without sensory legs. This resulted in offspring with legs that had sensory capabilities, indicating that it’s a genetically dominant trait.
Exactly why sea robins evolved the way they did is still unknown, but the research team came up with a hypothesis. They think the legs of sea robin ancestors were originally intended for locomotion, but they gradually started gaining some sensory utility, allowing the animal to search the visible surface of the seafloor for food. Those fish that needed to search deeper for food developed sensory legs that allowed them to taste and dig for hidden prey.
“Future work will leverage the remarkable biodiversity of sea robins to understand the genetic basis of novel trait formation and diversification in vertebrates,” the team also said in the first study. “Our work represents a basis for understanding how novel traits evolve.”
Singing off-key in front of others is one way to get embarrassed. Regardless of how you get there, why does embarrassment almost inevitably come with burning cheeks that turn an obvious shade of red (which is possibly even more embarrassing)?
Blushing starts not in the face but in the brain, though exactly where has been debated. Earlier thinking held that the blush reaction was associated with higher socio-cognitive processes, such as thinking about how one is perceived by others.
After studying subjects who watched videos of themselves singing karaoke, however, researchers led by Milica Nicolic of the University of Amsterdam have found that blushing is really the result of specific emotions being aroused.
Nicolic’s findings suggest that blushing “is a consequence of a high level of ambivalent emotional arousal that occurs when a person feels threatened and wants to flee but, at the same time, feels the urge not to give up,” as she and her colleagues put it in a study recently published in Proceedings of the Royal Society B.
Taking the stage
The researchers sought out test subjects who were most likely to blush when watching themselves sing bad karaoke: adolescent girls. Adolescents tend to be much more self-aware and more sensitive to being judged by others than adults are.
The subjects couldn’t pick just any song. Nicolic and her team made sure to offer a choice of four songs that music experts had deemed difficult to sing: “Hello” by Adele, “Let it Go” from Frozen, “All I Want For Christmas is You” by Mariah Carey, and “All the Things You Said” by tATu. Videos of the subjects were recorded as they sang.
On their second visit to the lab, subjects were put in an MRI scanner and were shown videos of themselves and others singing karaoke. They watched 15 video clips of themselves singing and, as a control, 15 segments of someone who was thought to have similar singing ability, so secondhand embarrassment could be ruled out.
The other control factor was videos of professional singers disguised as participants. Because the professionals sang better overall, it was unlikely they would trigger secondhand embarrassment.
Enough to make you blush
The researchers checked for an increase in cheek temperature, as blood flow measurements had been used in past studies but are more prone to error. This was measured with a fast-response temperature transducer as the subjects watched karaoke videos.
It was only when the subjects watched themselves sing that cheek temperature went up. There was virtually no increase or decrease when watching others—meaning no secondhand embarrassment—and a slight decrease when watching a professional singer.
The MRI scans revealed which regions of the brain were activated as subjects watched videos of themselves. These included the anterior insular cortex, or anterior insula, which responds to a range of emotions, including fear, anxiety, and, of course, embarrassment. There was also the mid-cingulate cortex, which emotionally and cognitively manages pain—including embarrassment—by trying to anticipate that pain and reacting with aversion and avoidance. The dorsolateral prefrontal cortex, which helps process fear and anxiety, also lit up.
There was also more activity detected in the cerebellum, which is responsible for much of the emotional processing in the brain, when subjects watched themselves sing. Those who blushed more while watching their own video clips showed the most cerebellum activity. This could mean they were feeling stronger emotions.
What surprised the researchers was that there was no additional activation in areas known for being involved in the process of understanding one’s mental state, meaning someone’s opinion of what others might think of them may not be necessary for blushing to happen.
So blushing is really more about the surge of emotions someone feels when being faced with things that pertain to the self and not so much about worrying what other people think. That can definitely happen if you’re watching a video of your own voice cracking at the high notes in an Adele song.
Human Neuron, Digital Light Microscope. Credit: BSIP/Universal Images Group via Getty Images
“Language is a huge field, and we are novices in this. We know a lot about how different areas of the brain are involved in linguistic tasks, but the details are not very clear,” says Mohsen Jamali, a computational neuroscience researcher at Harvard Medical School who led a recent study into the mechanism of human language comprehension.
“What was unique in our work was that we were looking at single neurons. There is a lot of studies like that on animals—studies in electrophysiology, but they are very limited in humans. We had a unique opportunity to access neurons in humans,” Jamali adds.
Probing the brain
Jamali’s experiment involved playing recorded sets of words to patients who, for clinical reasons, had implants that monitored the activity of neurons located in their left prefrontal cortex—the area that’s largely responsible for processing language. “We had data from two types of electrodes: the old-fashioned tungsten microarrays that can pick up the activity of a few neurons, and the Neuropixel probes, which are the latest development in electrophysiology,” Jamali says. The Neuropixels were first inserted in human patients in 2022 and could record the activity of over a hundred neurons.
“So we were in the operation room and asked the patient to participate. We had a mixture of sentences and words, including gibberish sounds that weren’t actual words but sounded like words. We also had a short story about Elvis,” Jamali explains. He said the goal was to figure out if there was some structure to the neuronal response to language. Gibberish words were used as a control to see if the neurons responded to them in a different way.
“The electrodes we used in the study registered voltage—it was a continuous signal at 30 kHz sampling rate—and the critical part was to dissociate how many neurons we had in each recording channel. We used statistical analysis to separate individual neurons in the signal,” Jamali says. Then, his team synchronized the neuronal activity signals with the recordings played to the patients down to a millisecond and started analyzing the data they gathered.
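The article doesn’t spell out those statistical methods, but the first step in any such pipeline looks roughly like the sketch below (synthetic data and a generic threshold-crossing rule, not the study’s actual spike-sorting code), which picks spikes out of a 30 kHz voltage trace and aligns them to a stimulus onset:

```python
# Generic spike-detection sketch (not the study's actual sorting pipeline).
import numpy as np

fs = 30_000                                   # 30 kHz sampling rate, as in the recordings
rng = np.random.default_rng(1)
voltage = rng.normal(0, 10e-6, fs * 2)        # two seconds of synthetic noise (volts)
spike_times = rng.uniform(0, 2, 40)           # 40 fake spike times (seconds)
for st in spike_times:
    i = int(st * fs)
    voltage[i:i + 30] -= 80e-6                # crude negative-going spike waveform

# Detect threshold crossings (a common rule of thumb: several times the robust noise estimate).
threshold = -5 * np.median(np.abs(voltage)) / 0.6745
crossings = np.flatnonzero((voltage[1:] < threshold) & (voltage[:-1] >= threshold))
detected_s = crossings / fs

# Align detected spikes to a stimulus (e.g., a word onset) at t = 1.0 s, in milliseconds.
word_onset = 1.0
latencies_ms = (detected_s[detected_s > word_onset] - word_onset) * 1000
print(f"detected {detected_s.size} spikes; first post-onset latency: {latencies_ms.min():.1f} ms")
```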
Putting words in drawers
“First, we translated words in our sets to vectors,” Jamali says. Specifically, his team used Word2Vec, a technique used in computer science to find relationships between words contained in a large corpus of text. What Word2Vec can do is tell if certain words have something in common—if they are synonyms, for example. “Each word was represented by a vector in a 300-dimensional space. Then we just looked at the distance between those vectors and if the distance was close, we concluded the words belonged in the same category,” Jamali explains.
Then the team used these vectors to identify words that clustered together, which suggested they had something in common (something they later confirmed by examining which words were in a cluster together). They then determined whether specific neurons responded differently to different clusters of words. It turned out they did.
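A rough sketch of that kind of analysis, using generic off-the-shelf tools rather than the team’s actual pipeline (the word list below is a stand-in, and random vectors take the place of real embeddings), might look like this:

```python
# Illustrative sketch of vector-space clustering of words (not the paper's exact pipeline).
import numpy as np
from sklearn.cluster import KMeans

# Real embeddings could come from a pretrained Word2Vec model, e.g. via gensim:
#   from gensim.models import KeyedVectors
#   wv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)
# Here we fake 300-dimensional vectors so the sketch runs standalone.
words = ["dog", "cat", "horse", "happy", "sad", "angry", "rain", "snow", "wind"]
rng = np.random.default_rng(0)
vectors = rng.normal(size=(len(words), 300))          # stand-in for wv[word]

# Group words whose vectors sit close together; the study ended up with nine clusters.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)
for word, label in zip(words, kmeans.labels_):
    print(f"{word}: cluster {label}")

# Next step (not shown): for each neuron, compare firing rates across clusters
# to see whether it has a "preferred" semantic domain.
```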
“We ended up with nine clusters. We looked at which words were in those clusters and labeled them,” Jamali says. It turned out that each cluster corresponded to a neat semantic domain. Specialized neurons responded to words referring to animals, while other groups responded to words referring to feelings, activities, names, weather, and so on. “Most of the neurons we registered had one preferred domain. Some had more, like two or three,” Jamali explained.
The mechanics of comprehension
The team also tested if the neurons were triggered by the mere sound of a word or by its meaning. “Apart from the gibberish words, another control we used in the study was homophones,” Jamali says. The idea was to test if the neurons responded differently to the word “sun” and the word “son,” for example.
It turned out that the response changed based on context. When the sentence made it clear the word referred to a star, the sound activated the neurons that responded to weather phenomena. When it was clear that the same sound referred to a person, it activated the neurons that responded to words for relatives. “We also presented the same words at random without any context and found that it didn’t elicit as strong a response as when the context was available,” Jamali claims.
But the language processing in our brains will need to involve more than just different semantic categories being processed by different groups of neurons.
“There are many unanswered questions in linguistic processing. One of them is how much a structure matters, the syntax. Is it represented by a distributed network, or can we find a subset of neurons that encode structure rather than meaning?” Jamali asked. Another thing his team wants to study is what the neural processing looks like during speech production, in addition to comprehension. “How are those two processes related in terms of brain areas and the way the information is processed,” Jamali adds.
The last thing—and according to Jamali the most challenging thing—is using the Neuropixel probes to see how information is processed across different layers of the brain. “The Neuropixel probe travels through the depths of the cortex, and we can look at the neurons along the electrode and say like, ‘OK, the information from this layer, which is responsible for semantics, goes to this layer, which is responsible for something else.’ We want to learn how much information is processed by each layer. This should be challenging, but it would be interesting to see how different areas of the brain are involved at the same time when presented with linguistic stimuli,” Jamali concludes.
The hydra is a Lovecraftian-looking microorganism with a mouth surrounded by tentacles on one end, an elongated body, and a foot on the other end. It has no brain or centralized nervous system. Despite the lack of either of those things, it can still feel hunger and fullness. How can these creatures know when they are hungry and realize when they have had enough?
While they lack brains, hydra do have a nervous system. Researchers from Kiel University in Germany found they have an endodermal (in the digestive tract) and ectodermal (in the outermost layer of the animal) neuronal population, both of which help them react to food stimuli. Ectodermal neurons control physiological functions such as moving toward food, while endodermal neurons are associated with feeding behavior such as opening the mouth—which also vomits out anything indigestible.
Even such a limited nervous system is capable of some surprisingly complex functions. Hydras might even give us some insights into how appetite evolved and what the early evolutionary stages of a central nervous system were like.
No, thanks, I’m full
Before finding out how the hydra’s nervous system controls hunger, the researchers focused on what causes the strongest feeling of satiety, or fullness, in the animals. They were fed with the brine shrimp Artemia salina, which is among their usual prey, and exposed to the antioxidant glutathione. Previous studies have suggested that glutathione triggers feeding behavior in hydras, causing them to curl their tentacles toward their mouths as if they are swallowing prey.
Hydra fed with as much Artemia as they could eat were given glutathione afterward, while the other group was given only glutathione and no actual food. Hunger was gauged by how fast and how often they opened their mouths.
It turned out that the first group, which had already glutted themselves on shrimp, showed hardly any response to glutathione eight hours after being fed. Their mouths barely opened—and slowly if so—because they were not hungry enough for even a feeding trigger like glutathione to make them feel they needed seconds.
It was only at 14 hours post-feeding that the hydra that had eaten shrimp opened their mouths wide enough and fast enough to indicate hunger. However, those that were not fed and only exposed to glutathione started showing signs of hunger only four hours after exposure. Mouth opening was not the only behavior provoked by hunger since starved animals also somersaulted through the water and moved toward light, behaviors associated with searching for food. Sated animals would stop somersaulting and cling to the wall of the tank they were in until they were hungry again.
Food on the “brain”
After observing the behavioral changes in the hydra, the research team looked into the neuronal activity behind those behaviors. They focused on two neuronal populations, the ectodermal population known as N3 and the endodermal population known as N4, both known to be involved in hunger and satiety. While these had been known to influence hydra feeding responses, how exactly they were involved was unknown until now.
Hydra have N3 neurons all over their bodies, especially in the foot. Signals from these neurons tell the animal that it has eaten enough and is experiencing satiety. The frequency of these signals decreased as the animals grew hungrier and displayed more behaviors associated with hunger. The frequency of N3 signals did not change in animals that were only exposed to glutathione and not fed, and these hydra behaved just like animals that had gone without food for an extended period of time. It was only when they were given actual food that the N3 signal frequency increased.
“The ectodermal neuronal population N3 is not only responding to satiety by increasing neuronal activity, but is also controlling behaviors that changed due to feeding,” the researchers said in their study, which was recently published in Cell Reports.
Though N4 neurons were only seen to communicate indirectly with the N3 population in the presence of food, they were found to influence eating behavior by regulating how wide the hydras opened their mouths and how long they kept them open. A lower frequency of N4 signals was seen in hydra that were starved or only exposed to glutathione, while a higher frequency of N4 signals was associated with the animals keeping their mouths shut.
So, what can the neuronal activity of a tiny, brainless creature possibly tell us about the evolution of our own complex brains?
The researchers think the hydra’s simple nervous system may parallel the much more complex central and enteric (in the gut) nervous systems that we have. While N3 and N4 operate independently, there is still some interaction between them. The team also suggests that the way N4 regulates the hydra’s eating behavior is similar to the way the digestive tracts of mammals are regulated.
“A similar architecture of neuronal circuits controlling appetite/satiety can be also found in mice where enteric neurons, together with the central nervous system, control mouth opening,” they said in the same study.
The spliceosome is a large complex of proteins and RNAs.
Almost 1,500 genes have been implicated in intellectual disabilities; yet for most people with such disabilities, genetic causes remain unknown. Perhaps this is in part because geneticists have been focusing on the wrong stretches of DNA when they go searching. To rectify this, Ernest Turro—a biostatistician who focuses on genetics, genomics, and molecular diagnostics—used whole genome sequencing data from the 100,000 Genomes Project to search for areas associated with intellectual disabilities.
His lab found a genetic association that is the most common one yet to be associated with neurodevelopmental abnormality. And the gene they identified doesn’t even make a protein.
Trouble with the spliceosome
Most genes include instructions for how to make proteins. That’s true. And yet human genes are not arranged linearly—or rather, they are arranged linearly, but not contiguously. A gene containing the instructions for which amino acids to string together to make a particular protein—hemoglobin, insulin, collagen, albumin, keratin, whatever protein you like—is modular. It contains part of the amino acid sequence, then it has a chunk of DNA that is largely irrelevant to that sequence, then a bit more of the protein’s sequence, then another chunk of random DNA, back and forth until the end of the protein. It’s as if each of these prose paragraphs were separated by a string of unrelated letters (but not a meaningful paragraph from a different article).
In order to read this piece through coherently, you’d have to take out the letters interspersed between its paragraphs. And that’s exactly what happens with genes. In order to read the gene through coherently, the cell has machinery that splices out the intervening sequences and links up the protein-making instructions into a continuous whole. (This doesn’t happen in the DNA itself; it happens to an RNA copy of the gene.) The cell’s machinery is obviously called the spliceosome.
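A toy illustration of what that splicing accomplishes, with a made-up sequence rather than a real gene, is simply to drop the intervening chunks (introns) and join the protein-coding chunks (exons) end to end:

```python
# Toy illustration of splicing (made-up sequence): exons are joined, introns dropped.
# Real introns in the RNA copy typically begin with GU and end with AG.
exon1, intron1 = "AUGGCU", "GUAAGU-INTRON-ONE-AG"
exon2, intron2 = "GAAUCC", "GUAAGU-INTRON-TWO-AG"
exon3 = "UAA"

pre_mrna = exon1 + intron1 + exon2 + intron2 + exon3   # what's transcribed from the gene
mature_mrna = exon1 + exon2 + exon3                    # what the spliceosome leaves behind

print(pre_mrna)     # the interrupted, hard-to-read version
print(mature_mrna)  # AUGGCUGAAUCCUAA: a contiguous reading frame ready for translation
```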
About a hundred proteins make up the spliceosome. But the gene just found to be so strongly associated with neurodevelopmental disorders doesn’t encode any of them. Rather, it encodes one of five RNA molecules that are also part of the spliceosome complex and interact with the RNAs that are being spliced. Mutations in this gene were found to be associated with a syndrome whose symptoms include intellectual disability, seizures, short stature, neurodevelopmental delay, drooling, motor delay, hypotonia (low muscle tone), and microcephaly (having a small head).
Supporting data
The researchers buttressed their finding by examining three other databases; in all of them, they found more people with the syndrome who had mutations in this same gene. The mutations occur in a remarkably conserved region of the genome, suggesting that it is very important. Most of the mutations were new in the affected people—i.e., not inherited from their parents—but there was one case in which a particular mutation in the gene was inherited. Based on this, the researchers concluded that this particular variant may cause a less severe disorder than the other mutations.
Many studies that look for genes associated with diseases have focused on searching catalogs of protein coding genes. These results suggest that we could have been missing important mutations because of this focus.
The Colorado River toad, also known as the Sonoran Desert toad.
It is becoming increasingly accepted that classic psychedelics like LSD, psilocybin, ayahuasca, and mescaline can act as antidepressants and anti-anxiety treatments in addition to causing hallucinations. They act by binding to a serotonin receptor. But there are 14 known types of serotonin receptors, and most of the research into these compounds has focused on only one of them—the one these molecules like, called 5-HT2A. (5-HT, short for 5-hydroxytryptamine, is the chemical name for serotonin.)
The Colorado River toad (Incilius alvarius), also known as the Sonoran Desert toad, secretes a psychedelic compound that likes to bind to a different serotonin receptor subtype called 5-HT1A. And that difference may be the key to developing an entirely distinct class of antidepressants.
Uncovering novel biology
Like other psychedelics, the one the toad produces decreases depression and anxiety and induces meaningful and spiritually significant experiences. It has been used clinically to treat vets with post-traumatic stress disorder and is being developed as a treatment for other neurological disorders and drug abuse. 5-HT1A is a validated therapeutic target, as approved drugs, including the antidepressant Viibryd and the anti-anxiety med Buspar, bind to it. But little is known about how psychedelics engage with this receptor and which effects it mediates, so Daniel Wacker’s lab decided to look into it.
The researchers started by making chemical modifications to the toad psychedelic and noting how each of the tweaked molecules bound to both 5-HT2A and 5-HT1A. As a group, these psychedelics are known as “designer tryptamines”—that’s tryp with a “y”, mind you—because they are metabolites of the amino acid tryptophan.
The lab made 10 variants and found one that is more than 800-fold selective about sticking to 5-HT1A as compared to 5-HT2A. That makes it a great research tool for elucidating the structure-activity relationship of the 5-HT1A receptor, as well as the molecular mechanisms behind the pharmacology of the drugs on the market that bind to it. The lab used it to explore both of those avenues. However, the variant’s ultimate utility might be as a new therapeutic for psychiatric disorders, so they tested it in mice.
Improving the lives of mice
The compound did not induce hallucinations in mice, as measured by the “head-twitch response.” But it did alleviate depression, as measured by a “chronic social defeat stress model.” In this model, for 10 days in a row, the experimental mouse was introduced to an “aggressor mouse” for “10-minute defeat bouts”; essentially, it got beat up by a bully at recess for two weeks. Understandably, after this experience, the experimental mouse tended not to be that friendly with new mice, as controls usually are. But when injected with the modified toad psychedelic, the bullied mice were more likely to interact positively with new mice they met.
Depressed mice, like depressed people, also suffer from anhedonia: a reduced ability to experience pleasure. In mice, this manifests in not taking advantage of drinking sugar water when given the opportunity. But treated bullied mice regained their preference for the sweet drink. About a third of mice seem to be “stress-resilient” in this model; the bullying doesn’t seem to faze them. The drug increased the number of resilient mice.
The 5-HT2A receptor has hogged all of the research love because it mediates the hallucinogenic effects of many popular psychedelics, so people assumed that it must mediate their therapeutic effects, too. However, Wacker argues that there is little evidence supporting this assumption. Wacker’s new toad-based psychedelic variant and its preference for the 5-HT1A receptor will help elucidate the complementary roles these two receptor subtypes play in mediating the cellular and psychological effects of psychedelic molecules. And it might provide the basis for a new tryptamine-based mental health treatment as well—one without hallucinatory side effects, disappointing as that may be to some.