Biology


How should we treat beings that might be sentient?


Being aware of the maybe self-aware

A book argues that we’ve not thought enough about things that might think.

What rights should a creature with ambiguous self-awareness, like an octopus, be granted? Credit: A. Martin UW Photography

If you aren’t yet worried about the multitude of ways you inadvertently inflict suffering on other living creatures, you will be after reading The Edge of Sentience by Jonathan Birch. And for good reason. Birch, a professor of philosophy at the London School of Economics and Political Science, was one of a team of experts chosen by the UK government whose work informed the Animal Welfare (Sentience) Act 2022—a law that protects animals whose sentience status is unclear.

According to Birch, even insects may possess sentience, which he defines as the capacity to have valenced experiences, or experiences that feel good or bad. At the very least, Birch explains, insects (as well as all vertebrates and a selection of invertebrates) are sentience candidates: animals that may be conscious and, until proven otherwise, should be regarded as such.

Although it might be a stretch to wrap our mammalian minds around insect sentience, it is not difficult to imagine that fellow vertebrates have the capacity to experience life, nor does it come as a surprise that even some invertebrates, such as octopuses and other cephalopod mollusks (squid, cuttlefish, and nautilus), qualify as sentience candidates. In fact, one species of octopus, Octopus vulgaris, has been protected by the UK’s Animals (Scientific Procedures) Act (ASPA) since 1986, which illustrates how long we have been aware of the possibility that invertebrates might be capable of experiencing valenced states of awareness, such as contentment, fear, pleasure, and pain.

A framework for fence-sitters

Non-human animals, of course, are not the only beings with an ambiguous sentience status that poses complicated questions. Birch discusses people with disorders of consciousness, embryos and fetuses, neural organoids (brain tissue grown in a dish), and even “AI technologies that reproduce brain functions and/or mimic human behavior,” all of which share the unenviable position of being perched on the edge of sentience—a place where it is excruciatingly unclear whether or not these individuals are capable of conscious experience.

What’s needed, Birch argues, when faced with such staggering uncertainty about the sentience status of other beings, is a precautionary framework that outlines best practices for decision-making regarding their care. And in The Edge of Sentience, he provides exactly that, in meticulous, orderly detail.

Over more than 300 pages, he outlines three fundamental framework principles and 26 specific case proposals about how to handle complex situations related to the care and treatment of sentience-edgers. For example, Proposal 2 cautions that “a patient with a prolonged disorder of consciousness should not be assumed incapable of experience” and suggests that medical decisions made on their behalf cautiously presume they are capable of feeling pain. Proposal 16 warns about conflating brain size, intelligence, and sentience, and recommends decoupling the three so that we do not incorrectly assume that small-brained animals are incapable of conscious experience.

Surgeries and stem cells

Be forewarned, some topics in The Edge of Sentience are difficult. For example, Chapter 10 covers embryos and fetuses. In the 1980s, Birch shares, it was common practice to not use anesthesia on newborn babies or fetuses when performing surgery. Why? Because whether or not newborns and fetuses experience pain was up for debate. Rather than put newborns and fetuses through the risks associated with anesthesia, it was accepted practice to give them a paralytic (which prevents all movement) and carry on with invasive procedures, up to and including heart surgery.

After parents raised alarms over the devastating outcomes of this practice, such as infant mortality, it was eventually changed. Birch’s takeaway message is clear: When in doubt about the sentience status of a living being, we should probably assume it is capable of experiencing pain and take all necessary precautions to prevent it from suffering. To presume the opposite can be unethical.

This guidance is repeated throughout the book. Neural organoids, discussed in Chapter 11, are mini-models of brains developed from stem cells. The potential for scientists to use neural organoids to unravel the mechanisms of debilitating neurological conditions—and to avoid invasive animal research while doing so—is immense. It is also ethical, Birch posits, since studying organoids lessens the suffering of research animals. However, we don’t yet know whether or not neural tissue grown in a dish has the potential to develop sentience, so he argues that we need to develop a precautionary approach that balances the benefits of reduced animal research against the risk that neural organoids are capable of being sentient.

A four-pronged test

Along this same line, Birch says, all welfare decisions regarding sentience-edgers require an assessment of proportionality. We must balance a proposed response to a given risk against the potential harms that could result if nothing is done to minimize that risk. To do this, he suggests testing four criteria: permissibility-in-principle, adequacy, reasonable necessity, and consistency. Birch refers to this assessment process as PARC and dives deep into its implementation in Chapter 8.

When applying the PARC criteria, one begins by testing permissibility-in-principle: whether or not the proposed response to a risk is ethically permissible. To illustrate this, Birch poses a hypothetical question: would it be ethically permissible to mandate vaccination in response to a pandemic? If a panel of citizens were in charge of answering this question, they might say “no,” because forcing people to be vaccinated feels unethical. Yet, when faced with the same question, a panel of experts might say “yes,” because allowing people to die who could be saved by vaccination also feels unethical. Gauging permissibility-in-principle, therefore, entails careful consideration of the likely possible outcomes of a proposed response. If an outcome is deemed ethical, it is permissible.

Next, the adequacy of a proposed response must be tested. A proportionate response to a risk must do enough to lessen the risk. This means the risk must be reduced to “an acceptable level” or, if that’s not possible, a response should “deliver the best level of risk reduction that can be achieved” via an ethically permissible option.

The third test is reasonable necessity. A proposed response to a risk must not overshoot—it should not go beyond what is reasonably necessary to reduce risk, in terms of either cost or imposed harm. And last, consistency should be considered. The example Birch presents is animal welfare policy. He suggests we should always “aim for taxonomic consistency: our treatment of one group of animals (e.g., vertebrates) should be consistent with our treatment of another (e.g., invertebrates).”

The Edge of Sentience, as a whole, is a dense text overflowing with philosophical rhetoric. Yet this rhetoric plays a crucial role in the storytelling: it is the backbone for Birch’s clear and organized conclusions, and it serves as a jumping-off point for the logical progression of his arguments. Much like “I think, therefore I am” gave René Descartes a foundation upon which to build his idea of substance dualism, Birch uses the fundamental position that humans should not inflict gratuitous suffering onto fellow creatures as a base upon which to build his precautionary framework.

For curious readers who would prefer not to wade too deeply into meaty philosophical concepts, Birch generously provides a shortcut to his conclusions: a cheat sheet of his framework principles and special case proposals is presented at the front of the book.

Birch’s ultimate message in The Edge of Sentience is that we need a massive shift in how we view beings with a questionable sentience status. And we should ideally make this change now, rather than waiting for scientific research to infallibly determine who and what is sentient. Birch argues that one way that citizens and policy-makers can begin this process is by adopting the following decision-making framework: always avoid inflicting gratuitous suffering on sentience candidates; take precautions when making decisions regarding a sentience candidate; and make proportional decisions about the care of sentience candidates that are “informed, democratic and inclusive.”

You might be tempted to shake your head at Birch’s confidence in humanity. No matter how deeply you agree with his stance of doing no harm, it’s hard to have confidence in humanity given our track record of not making big changes for the benefit of living creatures, even when said creatures include our own species (cue global warming here). It seems excruciatingly unlikely that the entire world will adopt Birch’s rational, thoughtful, comprehensive plan for reducing the suffering of all potentially sentient creatures. Yet Birch, a philosopher at heart, ignores human history and maintains a tone of articulate, patient optimism. He clearly believes in us—he knows we can do better—and he offers to hold our hands and walk us through the steps to do so.

Lindsey Laughlin is a science writer and freelance journalist who lives in Portland, Oregon, with her husband and four children. She earned her BS from UC Davis with majors in physics, neuroscience, and philosophy.



Tweaking non-neural brain cells can cause memories to fade


Neurons and a second cell type called an astrocyte collaborate to hold memories.

Astrocytes (labelled in black) sit within a field of neurons. Credit: Ed Reschke

“If we go back to the early 1900s, this is when the idea was first proposed that memories are physically stored in some location within the brain,” says Michael R. Williamson, a researcher at the Baylor College of Medicine in Houston. For a long time, neuroscientists thought that the storage of memory in the brain was the job of engrams, ensembles of neurons that activate during a learning event. But it turned out this wasn’t the whole picture.

Williamson’s research investigated the role astrocytes, non-neuronal brain cells, play in the read-and-write operations that go on in our heads. “Over the last 20 years the role of astrocytes has been understood better. We’ve learned that they can activate neurons. The addition we have made to that is showing that there are subsets of astrocytes that are active and involved in storing specific memories,” Williamson says in describing a new study his lab has published.

One consequence of this finding: Astrocytes could be artificially manipulated to suppress or enhance a specific memory, leaving all other memories intact.

Marking star cells

Astrocytes, otherwise known as star cells due to their shape, play various roles in the brain, and many are focused on the health and activity of their neighboring neurons. Williamson’s team started by developing techniques that enabled them to mark chosen ensembles of astrocytes to see when they activate genes (including one named c-Fos) that help neurons reconfigure their connections and are deemed crucial for memory formation. This was based on the idea that the same pathway would be active in neurons and astrocytes.

“In simple terms, we use genetic tools that allow us to inject mice with a drug that artificially makes astrocytes express some other gene or protein of interest when they become active,” says Wookbong Kwon, a biotechnologist at Baylor College and co-author of the study.

Those proteins of interest were mainly fluorescent proteins that make cells fluoresce bright red. This way, the team could spot the astrocytes in mouse brains that became active during learning scenarios. Once the tagging system was in place, Williamson and his colleagues gave their mice a little scare.

“It’s called fear conditioning, and it’s a really simple idea. You take a mouse, put it into a new box, one it’s never seen before. While the mouse explores this new box, we just apply a series of electrical shocks through the floor,” Williamson explains. A mouse treated this way remembers this as an unpleasant experience and associates it with contextual cues like the box’s appearance, the smells and sounds present, and so on.

The tagging system lit up all astrocytes that expressed the c-Fos gene in response to fear conditioning. Williamson’s team inferred that this is where the memory is stored in the mouse’s brain. Knowing that, they could move on to the next question: whether and how astrocytes and engram neurons interact during this process.

Modulating engram neurons

“Astrocytes are really bushy,” Williamson says. They have a complex morphology with lots and lots of micro or nanoscale processes that infiltrate the area surrounding them. A single astrocyte can contact roughly 100,000 synapses, and not all of them will be involved in learning events. So the team looked for correlations between astrocytes activated during memory formation and the neurons that were tagged at the same time.

“When we did that, we saw that engram neurons tended to be contacting the astrocytes that are active during the formation of the same memory,” Williamson says. To see how astrocytes’ activity affects neurons, the team artificially stimulated the astrocytes by microinjecting them with a virus engineered to induce the expression of the c-Fos gene. “It directly increased the activity of engram neurons but did not increase the activity of non-engram neurons in contact with the same astrocyte,” Williamson explains.

This way his team established that at least some astrocytes could preferentially communicate with engram neurons. The researchers also noticed that astrocytes involved in memorizing the fear conditioning event had elevated levels of a protein called NFIA, which is known to regulate memory circuits in the hippocampus.

But probably the most striking discovery came when the researchers tested whether the astrocytes involved in memorizing an event also played a role in recalling it later.

Selectively forgetting

The first test to see if astrocytes were involved in recall was to artificially activate them when the mice were in a box that they were not conditioned to fear. It turned out that artificially activating the astrocytes that had been active during the formation of a fear memory in one box caused the mice to freeze even when they were in a different one.

So, the next question was, if you just killed or otherwise disabled an astrocyte ensemble active during a specific memory formation, would it just delete this memory from the brain? To get that done, the team used their genetic tools to selectively delete the NFIA protein in astrocytes that were active when the mice received their electric shocks. “We found that mice froze a lot less when we put them in the boxes they were conditioned to fear. They could not remember. But other memories were intact,” Kwon claims.

The memory was not completely deleted, though. The mice still froze in the boxes they were supposed to freeze in, but they did it for a much shorter time on average. “It looked like their memory was maybe a bit foggy. They were not sure if they were in the right place,” Williamson says.

After figuring out how to suppress a memory, the team also figured out where the “undo” button was and how to bring the memory back.

“When we deleted the NFIA protein in astrocytes, the memory was impaired, but the engram neurons were intact. So, the memory was still somewhere there. The mice just couldn’t access it,” Williamson claims. The team brought the memory back by artificially stimulating the engram neurons using the same technique they employed for activating chosen astrocytes. “That caused the neurons involved in this memory trace to be activated for a few hours. This artificial activity allowed the mice to remember it again,” Williamson says.

The team’s vision is that in the distant future this technique could be used in treatments targeting neurons that are overactive in disorders such as PTSD. “We now have a new cellular target that we can evaluate and potentially develop treatments that target the astrocyte component associated with memory,” Williamson claims. But there’s a lot more to learn before anything like that becomes possible. “We don’t yet know what signal is released by an astrocyte that acts on the neuron. Another thing is our study was focused on one brain region, which was the hippocampus, but we know that engrams exist throughout the brain in lots of different regions. The next step is to see if astrocytes play the same role in other brain regions that are also critical for memory,” Williamson says.

Nature, 2024.  DOI: 10.1038/s41586-024-08170-w


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



This elephant figured out how to use a hose to shower

And the hose-showering behavior was “lateralized,” that is, Mary preferred targeting her left body side more than her right. (Yes, Mary is a “left-trunker.”) Mary even adapted her showering behavior depending on the diameter of the hose: she preferred showering with a 24-mm hose over a 13-mm hose and preferred to shower with her trunk rather than with a 32-mm hose.

It’s not known where Mary learned to use a hose, but the authors suggest that elephants might have an intuitive understanding of how hoses work because of the similarity to their trunks. “Bathing and spraying themselves with water, mud, or dust are very common behaviors in elephants and important for body temperature regulation as well as skin care,” they wrote. “Mary’s behavior fits with other instances of tool use in elephants related to body care.”

Perhaps even more intriguing was Anchali’s behavior. While Anchali did not use the hose to shower, she nonetheless exhibited complex behavior in manipulating the hose: lifting it, kinking the hose, regrasping the kink, and compressing the kink. The latter, in particular, often resulted in reduced water flow while Mary was showering. Anchali eventually figured out how to further disrupt the water flow by placing her trunk on the hose and lowering her body onto it. Control experiments were inconclusive about whether Anchali was deliberately sabotaging Mary’s shower; the two elephants had been at odds and behaved aggressively toward each other at shower times. But similar cognitively complex behavior has been observed in elephants.

“When Anchali came up with a second behavior that disrupted water flow to Mary, I became pretty convinced that she is trying to sabotage Mary,” Brecht said. “Do elephants play tricks on each other in the wild? When I saw Anchali’s kink and clamp for the first time, I broke out in laughter. So, I wonder, does Anchali also think this is funny, or is she just being mean?”

Current Biology, 2024. DOI: 10.1016/j.cub.2024.10.017  (About DOIs).



Fungi may not think, but they can communicate

Because the soil layer was so thin, most hyphae, the thread-like filaments through which a fungus grows and spreads underground, were easily seen, giving the researchers an opportunity to observe where connections were being made in the mycelium. Early hyphal coverage was not too different between the X and circle formations. Later, each showed a strong hyphal network, which makes up the mycelium, but there were differences between them.

While the hyphal network was pretty evenly distributed around the circle, there were differences between the inner and outer blocks in the X arrangement. Levels of decay activity were determined by weighing the blocks before and after the incubation period, and decay was pretty even throughout the circle, but especially evident on the four outermost blocks of the X. The researchers suggest that there were more hyphal connections on those blocks for a reason.

“The outermost four blocks, which had a greater degree of connection, may have served as ‘outposts’ for foraging and absorbing water and nutrients from the soil, facilitated by their greater hyphal connections,” they wrote in the study.

Talk to me

Fungal mycelium experiences what’s called acropetal growth, meaning it grows outward in all directions from the center. Consistent with this, the hyphae started out growing outward from each block. But over time, the hyphae shifted to growing in the direction that would get them the most nutrients.

Why did it change? Here is where the team thinks communication comes in. Previous studies found electrical signals are transmitted through hyphae. These signals sync up after the hyphae connect into one huge mycelium, much like the signals transmitted among neurons in organisms with brains. Materials such as nutrients are also transferred throughout the network.



Bats use echolocation to make mental maps for navigation

Bat maps

To evaluate the route each bat took to get back to the roost, the team used their simulations to measure the echoic entropy it experienced along the way. The field where the bats were released was a low echoic entropy area, so during those first few minutes when they were flying around they were likely just looking for some more distinct, higher entropy landmarks to figure out where they were. Once they were oriented, they started flying to the roost, but not in a straight line. They meandered a bit, and the groups with higher sensory deprivation tended to meander more.

The meandering, researchers suspect, was due to the trouble the bats had maintaining a steady path while relying on echolocation alone. When they detected distinctive landmarks, like a specific orchard, they corrected their course. Repeating this process eventually brought them to their roost.

But could this be landmark-based navigation? Or perhaps simple beaconing, where an animal locks onto something like a distant light and moves toward it?

The researchers argue in favor of cognitive acoustic maps. “I think if echolocation wasn’t such a limited sensory modality, we couldn’t reach a conclusion about the bats using cognitive acoustic maps,” Goldshtein says. The distance between landmarks the bats used to correct their flight path was significantly longer than echolocation’s sensing range. Yet they knew which direction the roost was relative to one landmark, even when the next landmark on the way was acoustically invisible. You can’t do that without having the area mapped.

“It would be really interesting to understand how other bats do that, to compare between species,” Goldshtein says. There are bats that fly over a thousand meters above the ground, so they simply can’t sense any landmarks using echolocation. Other species hunt over sea, which, as per this team’s simulations, would be just one huge low-entropy area. “We are just starting. That’s why I do not study only navigation but also housing, foraging, and other aspects of their behavior. I think we still don’t know enough about bats in general,” Goldshtein claims.

Science, 2024.  DOI: 10.1126/science.adn6269



These hornets break down alcohol so fast that they can’t get drunk

Many animals, including humans, have developed a taste for alcohol in some form, but excessive consumption often leads to adverse health effects. One exception is the Oriental hornet. According to a new paper published in the Proceedings of the National Academy of Sciences, these hornets can guzzle seemingly unlimited amounts of ethanol regularly and at very high concentrations with no ill effects—not even intoxication. They pretty much drank honeybees used in the same experiments under the table.

“To the best of our knowledge, Oriental hornets are the only animal in nature adapted to consuming alcohol as a metabolic fuel,” said co-author Eran Levin of Tel Aviv University. “They show no signs of intoxication or illness, even after chronically consuming huge amounts of alcohol, and they eliminate it from their bodies very quickly.”

Per Levin et al., there’s a “drunken monkey” theory that predicts that certain animals well-adapted to low concentrations of ethanol in their diets nonetheless have adverse reactions at higher concentrations. Studies have shown that tree shrews, for example, can handle concentrations of up to 3.8 percent, but in laboratory conditions, when they consumed ethanol in concentrations of 10 percent or higher, they were prone to liver damage.

Similarly, fruit flies are fine with concentrations up to 4 percent but have increased mortality rates above that range. They’re certainly capable of drinking more: fruit flies can imbibe half their body volume in 15 percent (30 proof) alcohol each day. Not even spiking the ethanol with bitter quinine slows them down. Granted, they have ultra-fast metabolisms—the better to burn off the booze—but they can still become falling-down drunk. And fruit flies vary in their tolerance for alcohol depending on their genetic makeup—that is, how quickly their bodies adapt to the ethanol, requiring them to inhale more and more of it to achieve the same physical effects, much like humans.



A candy engineer explains the science behind the Snickers bar

It’s Halloween. You’ve just finished trick-or-treating and it’s time to assess the haul. You likely have a favorite, whether it’s chocolate bars, peanut butter cups, those gummy clusters with Nerds on them, or something else.

For some people, including me, one piece stands out—the Snickers bar, especially if it’s full-size. The combination of nougat, caramel, and peanuts coated in milk chocolate makes Snickers a popular candy treat.

As a food engineer studying candy and ice cream at the University of Wisconsin-Madison, I now look at candy in a whole different way than I did as a kid. Back then, it was all about shoveling it in as fast as I could.

Now, as a scientist who has made a career studying and writing books about confections, I have a very different take on candy. I have no trouble sacrificing a piece for the microscope or the texture analyzer to better understand how all the components add up. I don’t work for, own stock in, or receive funding from Mars Wrigley, the company that makes Snickers bars. But in my work, I do study the different components that make up lots of popular candy bars. Snickers has many of the most common elements you’ll find in your Halloween candy.

Let’s look at the elements of a Snickers bar as an example of candy science. As with almost everything, once you get into it, each component is more complex than you might think.


Snickers bars contain a layer of nougat, a layer of caramel mixed with peanuts, and a chocolate coating. Credit: istarif/iStock via Getty Images

Airy nougat

Let’s start with the nougat. The nougat in a Snickers bar is a slightly aerated candy with small sugar crystals distributed throughout.

One of the ingredients in the nougat is egg white, a protein that helps stabilize the air bubbles that provide a light texture. Often, nougats like this are made by whipping sugar and egg whites together. The egg whites coat the air bubbles created during whipping, which gives the nougat its aerated texture.

A boiled sugar syrup is then slowly mixed into the egg white sugar mixture, after which a melted fat is added. Since fat can cause air bubbles to collapse, this step has to be done last and very carefully.



How can you write data to DNA without changing the base sequence?

The developers of the system call each of these potentially modifiable spots on the template an epi-bit, with the modified version corresponding to a 1 in a conventional computer bit and the unmodified version corresponding to a 0. Because no synthesis is required, multiple bits can be written simultaneously. To read the information, the scientists rigged the system so that 1s fluoresce and 0s don’t. The fluorescence, along with the sequences of bases, was read as the DNA was passed through a tiny pore.
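To make the epi-bit scheme concrete, here is a minimal, purely illustrative Python sketch (not the authors’ software): it treats each modifiable site on a template as one bit, packs a message into batches of 350 bits (the per-write figure reported below), and “reads” it back by interpreting fluorescent positions as 1s and dark positions as 0s. The function names and the byte-level packing are assumptions made for illustration only.

```python
# Illustrative sketch of the epi-bit idea: a methylated (modified) site reads as 1,
# an unmodified site reads as 0. This is not the published pipeline.

BITS_PER_TEMPLATE = 350  # per the article, 350 bits can be written at a time


def text_to_bits(text: str) -> list[int]:
    """Turn a short message into a flat list of 0/1 values."""
    return [int(b) for byte in text.encode("utf-8") for b in f"{byte:08b}"]


def write_epibits(bits: list[int]) -> list[list[int]]:
    """Group bits into template-sized batches; a real system would methylate the
    positions holding 1s and leave the 0 positions untouched."""
    return [bits[i:i + BITS_PER_TEMPLATE] for i in range(0, len(bits), BITS_PER_TEMPLATE)]


def read_epibits(templates: list[list[int]]) -> str:
    """Reverse the process: fluorescent (1) vs. dark (0) positions back to text."""
    bits = [b for template in templates for b in template]
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits) - 7, 8))
    return data.decode("utf-8", errors="ignore")


if __name__ == "__main__":
    message = "hello, DNA"
    templates = write_epibits(text_to_bits(message))
    print(len(templates), "template batch(es) needed")
    print(read_epibits(templates))  # -> "hello, DNA"
```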

Pictures in a meta-genome

Using this system, Zhang et al. created five DNA templates and 175 bricks to record 350 bits at a time. Using a collection of tagged template molecules, the researchers could store and read roughly 275,000 bits, including a color picture of a panda’s face and a rubbing of a tiger from the Han dynasty, which ruled China from 202 BCE to 220 CE.

They then had 60 student volunteers “with diverse academic backgrounds” store texts of their choice in epi-bits using a simple kit in a classroom. Twelve of the 15 stored texts were read successfully.

We’re not quite ready for your cat videos yet, though. There are still errors in the printing and reading steps, and since these modifications don’t survive when DNA is copied, making additional versions of the stored information may get complicated. Plus, the stability of these modifications under different storage conditions remains unknown, although the authors note that their epi-bits stayed stable at temperatures of up to 95° C.

But once these and a few other problems are solved—and the technology is scaled up, further optimized and automated, and/or tweaked to accommodate other types of epigenetic modifications—it will be a clever and novel way to harness natural data storage methods for our needs.

Nature, 2024.  DOI: 10.1038/s41586-024-08040-5



Bizarre fish has sensory “legs” it uses for walking and tasting

Finding out what controls the formation of sensory legs meant growing sea robins from eggs. The research team observed that the legs of sea robins develop from the three pectoral fin rays that are around the stomach area of the fish, then separate from the fin as they continue to develop. Among the most active genes in the developing legs is the transcription factor (a protein that binds to DNA and turns genes on and off) known as tbx3a. When genetically engineered sea robins had tbx3a edited out with CRISPR-Cas9, it resulted in fewer legs, deformed legs, or both.

“Disruption of tbx3a results in upregulation of pectoral fin markers prior to leg separation, indicating that leg rays become more similar to fins in the absence of tbx3a,” the researchers said in a second study, also published in Current Biology.

To see whether genes for sensory legs are a dominant feature, the research team also tried creating sea robin hybrids, crossing species with and without sensory legs. This resulted in offspring with legs that had sensory capabilities, indicating that it’s a genetically dominant trait.

Exactly why sea robins evolved the way they did is still unknown, but the research team came up with a hypothesis. They think the legs of sea robin ancestors were originally used for locomotion, but they gradually started gaining some sensory utility, allowing the animal to search the visible surface of the seafloor for food. Those fish that needed to search deeper for food developed sensory legs that allowed them to taste and dig for hidden prey.

“Future work will leverage the remarkable biodiversity of sea robins to understand the genetic basis of novel trait formation and diversification in vertebrates,” the team also said in the first study. “Our work represents a basis for understanding how novel traits evolve.”

Current Biology, 2024. DOI:  10.1016/j.cub.2024.08.014, 10.1016/j.cub.2024.08.042



DNA confirms these 19th century lions ate humans

For several months in 1898, a pair of male lions turned the Tsavo region of Kenya into their own human hunting grounds, killing many construction workers who were building the Kenya-Uganda railway.  A team of scientists has now identified exactly what kinds of prey the so-called “Tsavo Man-Eaters” fed upon, based on DNA analysis of hairs collected from the lions’ teeth, according to a recent paper published in the journal Current Biology. They found evidence of various species the lions had consumed, including humans.

The British began construction of a railway bridge over the Tsavo River in March 1898, with Lieutenant-Colonel John Henry Patterson leading the project. But mere days after Patterson arrived on site, workers started disappearing or being killed. The culprits: two maneless male lions, so emboldened that they often dragged workers from their tents at night to eat them. At their peak, the lions were attacking workers almost daily—including the district officer, who narrowly escaped with claw lacerations on his back. (His assistant, however, was killed.)

Patterson finally managed to shoot and kill one of the lions on December 9 and the second 20 days later. The lion pelts decorated Patterson’s home as rugs for 25 years before being sold to Chicago’s Field Museum of Natural History in 1924. The skins were restored and used to reconstruct the lions, which are now on permanent display at the museum, along with their skulls.

Tale of the teeth

The Tsavo Man-Eaters naturally fascinated scientists, although the exact number of people they killed and/or consumed remains a matter of debate. Estimates run anywhere from 28–31 victims to 100 or more, with a 2009 study that analyzed isotopic signatures of the lions’ bone collagen and hair keratin favoring the lower range.



Octopus suckers inspire new tech for gripping objects underwater

Over the last few years, Virginia Tech scientists have been looking to the octopus for inspiration to design technologies that can better grip a wide variety of objects in underwater environments. Their latest breakthrough is a special switchable adhesive modeled after the shape of the animal’s suckers, according to a new paper published in the journal Advanced Science.

“I am fascinated with how an octopus in one moment can hold something strongly, then release it instantly. It does this underwater, on objects that are rough, curved, and irregular—that is quite a feat,” said co-author and research group leader Michael Bartlett. “We’re now closer than ever to replicating the incredible ability of an octopus to grip and manipulate objects with precision, opening up new possibilities for exploration and manipulation of wet or underwater environments.”

As previously reported, there are several examples in nature of efficient ways to latch onto objects in underwater environments, per the authors. Mussels, for instance, secrete adhesive proteins to attach themselves to wet surfaces, while frogs have uniquely structured toe pads that create capillary and hydrodynamic forces for adhesion. But cephalopods like the octopus have an added advantage: The adhesion supplied by their grippers can be quickly and easily reversed, so the creatures can adapt to changing conditions, attaching to wet and dry surfaces.

From a mechanical engineering standpoint, the octopus has an active, pressure-driven system for adhesion. The sucker’s wide outer rim creates a seal with the object via a pressure differential between the chamber and the surrounding medium. Then muscles (serving as actuators) contract and relax the cupped area behind the rim to add or release pressure as needed.

There have been several attempts to mimic cephalopods when designing soft robotic grippers, for example. Back in 2022, Bartlett and his colleagues wanted to go one step further and recreate not just the switchable adhesion but also the integrated sensing and control. The result was Octa-Glove, a wearable system for gripping underwater objects that mimicked the arm of an octopus.

Improving the Octa-Glove

Grabbing and releasing underwater objects of different sizes and shapes with an octopus-inspired adhesive. Credit: Chanhong Lee and Michael Bartlett

For the adhesion, they designed silicone stalks capped with a pneumatically controlled membrane, mimicking the structure of octopus suckers. These adhesive elements were then integrated with an array of LIDAR optical proximity sensors and a microcontroller for real-time detection of objects. When the sensors detect an object, the adhesion turns on, mimicking the octopus’s nervous and muscular systems. The team used a neoprene wetsuit glove as a base for the wearable glove, incorporating the adhesive elements and sensors in each finger, with flexible pneumatic tubes inserted at the base of the adhesive elements.
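To illustrate the sense-then-grip behavior described above, here is a hedged Python sketch of one pass of such a control loop. The function name, the per-finger structure, and the 20 mm trigger distance are assumptions for illustration; the article does not describe the Octa-Glove’s actual firmware or parameters.

```python
# Illustrative sketch only, not the published Octa-Glove firmware. It mirrors the
# described behavior: each fingertip pairs a proximity sensor with one pneumatic
# adhesive element, and adhesion engages when an object is within range.

DETECTION_RANGE_MM = 20.0  # assumed trigger distance; the article gives no number


def update_adhesion(proximity_readings_mm: list[float | None]) -> list[bool]:
    """One control-loop pass: decide, per finger, whether to engage adhesion.
    A None reading means the sensor saw nothing within its range."""
    return [
        reading is not None and reading < DETECTION_RANGE_MM
        for reading in proximity_readings_mm
    ]


if __name__ == "__main__":
    # Five fingertips: two see a nearby object, the rest see nothing or something far away.
    readings = [12.5, None, 45.0, 8.0, None]
    print(update_adhesion(readings))  # -> [True, False, False, True, False]
```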



Medicine Nobel goes to previously unknown way of controlling genes

Based on the stereotypical hairpin structure, researchers have scanned genomes and found over 38,000 likely precursors; nearly 50,000 mature microRNAs have been discovered by sequencing all the RNA found in cells from a variety of species. While found widely in animals, they’ve also been discovered in plants, raising the possibility that they existed in a single-celled ancestral organism.

While some microRNA genes, including lin-4 and let-7, have dramatic phenotypes when mutated, many have weak or confusing effects. This is likely in part due to the fact that a single microRNA can bind to and regulate a variety of genes and so may have a mix of effects when mutated. In other cases, several different microRNAs may bind to the same messenger RNA, creating a redundancy that makes the loss of a single microRNA difficult to detect.

Nevertheless, there’s plenty of evidence that, collectively, they’re essential for normal development in many organisms and tissues. Knocking out the gene that encodes the Dicer protein, which is needed for forming mature microRNAs, causes early embryonic lethality. Knockouts of the gene in specific cell types cause a variety of defects. For example, B cells never mature if Dicer is lost in that cell lineage, and a knockout in nerve cells causes microcephaly and limited branching of connections among neurons, leading the animals to die shortly after birth.

This being the Medicine prize, the Nobel Committee also cites a number of human genetic diseases that are caused by mutations in microRNA genes.

Overall, the award highlights just how complex life is at the cellular level. There’s a fair number of genes that have to be made by every cell simply to enable their survival. But as for the rest, they exist embedded in complex regulatory networks that interact to ensure that proteins are made only where and when they’re needed, and often degraded if they somehow get made anyway. And every now and then, fundamental research in an oddball species is still telling us unexpected things about those networks.
