Biology

Small charges in water spray can trigger the formation of key biochemicals

Once Zare’s team nailed down how droplets become electrically charged and how the micro-lightning phenomenon works, they recreated the Miller-Urey experiment. Only without the spark plugs.

Ingredients of life

After micro-lightnings started jumping between droplets in a mixture of gases similar to that used by Miller and Urey, the team examined their chemical composition with a mass spectrometer. They confirmed glycine, uracil, urea, cyanoethylene, and lots of other chemical compounds were made. “Micro-lightnings made all organic molecules observed previously in the Miller-Urey experiment without any external voltage applied,” Zare claims.

But does it really bring us any closer to explaining the beginnings of life? After all, Miller and Urey already demonstrated those molecules could be produced by electrical discharges in a primordial Earth’s atmosphere—does it matter all that much where those discharges came from? Zare argues that it does.

“Lightning is intermittent, so it would be hard for these molecules to concentrate. But if you look at waves crashing into rocks, you can think the spray would easily go into the crevices in these rocks,” Zare suggests. He envisions the water in these crevices evaporating, with new spray entering and evaporating again and again. The cyclic drying would allow the chemical precursors to build into more complex molecules. “When you go through such a dry cycle, it causes polymerization, which is how you make DNA,” Zare argues. Since sources of spray were likely common on the early Earth, Zare thinks this process could produce far more organic chemicals than potential alternatives like lightning strikes, hydrothermal vents, or impacting comets.

But even if micro-lightning really produced the basic building blocks of life on Earth, we’re still not sure how those combined into living organisms. “We did not make life. We just demonstrated a possible mechanism that gives us some chemical compounds you find in life,” Zare says. “It’s very important to have a lot of humility with this stuff.”

Science Advances, 2025. DOI: 10.1126/sciadv.adt8979

A “biohybrid” robotic hand built using real human muscle cells

Biohybrid robots work by combining biological components like muscles, plant material, and even fungi with non-biological materials. While we are pretty good at making the non-biological parts work, we’ve always had a problem with keeping the organic components alive and well. This is why machines driven by biological muscles have always been rather small and simple—up to a couple centimeters long and typically with only a single actuating joint.

“Scaling up biohybrid robots has been difficult due to the weak contractile force of lab-grown muscles, the risk of necrosis in thick muscle tissues, and the challenge of integrating biological actuators with artificial structures,” says Shoji Takeuchi, a professor at the University of Tokyo in Japan. Takeuchi led a research team that built a full-size, 18-centimeter-long biohybrid human-like hand with all five fingers driven by lab-grown human muscles.

Keeping the muscles alive

Out of all the roadblocks that keep us from building large-scale biohybrid robots, necrosis has probably been the most difficult to overcome. Growing muscle in a lab usually means using a liquid medium to supply nutrients and oxygen to muscle cells seeded on petri dishes or applied to gel scaffolds. Since these cultured muscles are small and ideally flat, nutrients and oxygen from the medium can easily reach every cell in the growing culture.

When we try to make the muscles thicker and therefore more powerful, cells buried deeper in those thicker structures are cut off from nutrients and oxygen, so they die, undergoing necrosis. In living organisms, this problem is solved by the vascular network. But building artificial vascular networks in lab-grown muscles is still something we can’t do very well. So, Takeuchi and his team had to find their way around the necrosis problem. Their solution was sushi rolling.

The team started by growing thin, flat muscle fibers arranged side by side on a petri dish. This gave all the cells access to nutrients and oxygen, so the muscles turned out robust and healthy. Once all the fibers were grown, Takeuchi and his colleagues rolled them into tubes called MuMuTAs (multiple muscle tissue actuators) like they were preparing sushi rolls. “MuMuTAs were created by culturing thin muscle sheets and rolling them into cylindrical bundles to optimize contractility while maintaining oxygen diffusion,” Takeuchi explains.

In one dog breed, selection for utility may have selected for obesity

High-risk Labradors also tended to pester their owners for food more often. Dogs with low genetic risk scores, on the other hand, stayed slim regardless of how closely their owners managed their feeding.

But other findings proved less obvious. “We’ve long known chocolate-colored Labradors are prone to being overweight, and I’ve often heard people say that’s because they’re really popular as pets for young families with toddlers that throw food on the floor all the time and where dogs are just not given that much attention,” Raffan says. Her team’s data showed that chocolate Labradors actually had a much higher genetic obesity risk than yellow or black ones.

Some of the Labradors particularly prone to obesity, the study found, were guide dogs, which were included in the initial group. Training a guide dog in the UK usually takes around two years, during which the dogs learn multiple skills, like avoiding obstacles, stopping at curbs, navigating complex environments, and responding to emergency scenarios. Not all dogs are able to successfully finish this training, which is why guide dogs are often selectively bred with other guide dogs in the hope their offspring would have a better chance at making it through the same training.

But it seems that this selective breeding among guide dogs might have had unexpected consequences. “Our results raise the intriguing possibility that we may have inadvertently selected dogs prone to obesity, dogs that really like their food, because that makes them a little bit more trainable. They would do anything for a biscuit,” Raffan says.

The study also found that genes responsible for obesity in dogs are also responsible for obesity in humans. “The impact high genetic risk has on dogs leads to increased appetite. It makes them more interested in food,” Raffan claims. “Exactly the same is true in humans. If you’re at high genetic risk you aren’t inherently lazy or rubbish about overeating—it’s just you are more interested in food and get more reward from it.”

Science, 2025. DOI: 10.1126/science.ads2145

How whale urine benefits the ocean ecosystem

A “great whale conveyor belt”

Illustration: how whale urine spreads throughout the ocean ecosystem. Credit: A. Boersma

Migrating whales typically gorge in summers at higher latitudes to build up energy reserves to make the long migration to lower latitudes. It’s still unclear exactly why the whales migrate, but it’s likely that pregnant females in particular find it more beneficial to give birth and nurse their young in warm, shallow, sheltered areas—perhaps to protect their offspring from predators like killer whales. Warmer waters also keep the whale calves warm as they gradually develop their insulating layers of blubber. Some scientists think that whales might also migrate to molt their skin in those same warm, shallow waters.

Roman et al. examined publicly available spatial data for whale feeding and breeding grounds, augmented with sightings from airplane and ship surveys to fill in gaps in the data, then fed that data into their models for calculating nutrient transport. They focused on six species known to migrate seasonally over long distances from higher latitudes to lower latitudes: blue whales, fin whales, gray whales, humpback whales, and North Atlantic and southern right whales.

They found that whales can transport some 4,000 tons of nitrogen each year during their migrations, along with 45,000 tons of biomass—and those numbers could have been three times larger in earlier eras before industrial whaling depleted populations. “We call it the ‘great whale conveyor belt,’” Roman said. “It can also be thought of as a funnel, because whales feed over large areas, but they need to be in a relatively confined space to find a mate, breed, and give birth. At first, the calves don’t have the energy to travel long distances like the moms can.” The study did not include any effects from whales releasing feces or sloughing their skin, which would also contribute to the overall nutrient flux.

“Because of their size, whales are able to do things that no other animal does. They’re living life on a different scale,” said co-author Andrew Pershing, an oceanographer at the nonprofit organization Climate Central. “Nutrients are coming in from outside—and not from a river, but by these migrating animals. It’s super-cool, and changes how we think about ecosystems in the ocean. We don’t think of animals other than humans having an impact on a planetary scale, but the whales really do.” 

Nature Communications, 2025. DOI: 10.1038/s41467-025-56123-2

“Wooly mice” a test run for mammoth gene editing

On Tuesday, the team behind the plan to bring mammoth-like animals back to the tundra announced the creation of what it is calling wooly mice, which have long fur reminiscent of the woolly mammoth. The long fur was created through the simultaneous editing of as many as seven genes, all with a known connection to hair growth, color, and/or texture.

But don’t think that this is a sort of mouse-mammoth hybrid. Most of the genetic changes were first identified in mice, not mammoths. So, the focus is on the fact that the team could do simultaneous editing of multiple genes—something that they’ll need to be able to do to get a considerable number of mammoth-like changes into the elephant genome.

Of mice and mammoths

The team at Colossal Biosciences has started a number of de-extinction projects, including the dodo and thylacine, but its flagship project is the mammoth. In all of these cases, the plan is to take stem cells from a closely related species that has not gone extinct, and edit in a series of changes based on the corresponding genomes of the extinct species. In the case of the mammoth, that means the elephant.

But the elephant poses a large number of challenges, as the draft paper that describes the new mice acknowledges. “The 22-month gestation period of elephants and their extended reproductive timeline make rapid experimental assessment impractical,” the researchers acknowledge. “Further, ethical considerations regarding the experimental manipulation of elephants, an endangered species with complex social structures and high cognitive capabilities, necessitate alternative approaches for functional testing.”

So, they turned to a species that has been used for genetic experiments for over a century: the mouse. We can do all sorts of genetic manipulations in mice, and have ways of using embryonic stem cells to get those manipulations passed on to a new generation of mice.

For testing purposes, the mouse also has a very significant advantage: mutations that change its fur are easy to spot. Over the century-plus that we’ve been using mice for research, people have noticed and cataloged a huge variety of mutations that affect their fur, altering color, texture, and length. In many of these cases, the DNA changes responsible have been identified.

AI versus the brain and the race for general intelligence


Intelligence, ±artificial

We already have an example of general intelligence, and it doesn’t look like AI.

There’s no question that AI systems have accomplished some impressive feats, mastering games, writing text, and generating convincing images and video. That’s gotten some people talking about the possibility that we’re on the cusp of AGI, or artificial general intelligence. While some of this is marketing fanfare, enough people in the field are taking the idea seriously that it warrants a closer look.

Many arguments come down to the question of how AGI is defined, which people in the field can’t seem to agree upon. This contributes to estimates of its advent that range from “it’s practically here” to “we’ll never achieve it.” Given that range, it’s impossible to provide any sort of informed perspective on how close we are.

But we do have an existing example of AGI without the “A”—the intelligence provided by the animal brain, particularly the human one. And one thing is clear: The systems being touted as evidence that AGI is just around the corner do not work at all like the brain does. That may not be a fatal flaw, or even a flaw at all. It’s entirely possible that there’s more than one way to reach intelligence, depending on how it’s defined. But at least some of the differences are likely to be functionally significant, and the fact that AI is taking a very different route from the one working example we have is likely to be meaningful.

With all that in mind, let’s look at some of the things the brain does that current AI systems can’t.

Defining AGI might help

Artificial general intelligence hasn’t really been defined. Those who argue that it’s imminent are either vague about what they expect the first AGI systems to be capable of or simply define it as the ability to dramatically exceed human performance at a limited number of tasks. Predictions of AGI’s arrival in the intermediate term tend to focus on AI systems demonstrating specific behaviors that seem human-like. The further one goes out on the timeline, the greater the emphasis on the “G” of AGI and its implication of systems that are far less specialized.

But most of these predictions are coming from people working in companies with a commercial interest in AI. It was notable that none of the researchers we talked to for this article were willing to offer a definition of AGI. They were, however, willing to point out how current systems fall short.

“I think that AGI would be something that is going to be more robust, more stable—not necessarily smarter in general but more coherent in its abilities,” said Ariel Goldstein, a researcher at Hebrew University of Jerusalem. “You’d expect a system that can do X and Y to also be able to do Z and T. Somehow, these systems seem to be more fragmented in a way. To be surprisingly good at one thing and then surprisingly bad at another thing that seems related.”

“I think that’s a big distinction, this idea of generalizability,” echoed neuroscientist Christa Baker of NC State University. “You can learn how to analyze logic in one sphere, but if you come to a new circumstance, it’s not like now you’re an idiot.”

Mariano Schain, a Google engineer who has collaborated with Goldstein, focused on the abilities that underlie this generalizability. He mentioned both long-term and task-specific memory and the ability to deploy skills developed in one task in different contexts. These are limited-to-nonexistent in existing AI systems.

Beyond those specific limits, Baker noted that “there’s long been this very human-centric idea of intelligence that only humans are intelligent.” That’s fallen away within the scientific community as we’ve studied more about animal behavior. But there’s still a bias to privilege human-like behaviors, such as the human-sounding responses generated by large language models.

The fruit flies that Baker studies can integrate multiple types of sensory information, control four sets of limbs, navigate complex environments, satisfy their own energy needs, produce new generations of brains, and more. And they do that all with brains that contain under 150,000 neurons, far fewer than current large language models.

These capabilities are complicated enough that it’s not entirely clear how the brain enables them. (If we knew how, it might be possible to engineer artificial systems with similar capacities.) But we do know a fair bit about how brains operate, and there are some very obvious ways that they differ from the artificial systems we’ve created so far.

Neurons vs. artificial neurons

Most current AI systems, including all large language models, are based on what are called neural networks. These were intentionally designed to mimic how some areas of the brain operate, with large numbers of artificial neurons taking an input, modifying it, and then passing the modified information on to another layer of artificial neurons. Each of these artificial neurons can pass the information on to multiple instances in the next layer, with different weights applied to each connection. In turn, each of the artificial neurons in the next layer can receive input from multiple sources in the previous one.

After passing through enough layers, the final layer is read and transformed into an output, such as the pixels in an image that correspond to a cat.
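The layered, weighted flow described above can be sketched in a few lines of plain NumPy. This is a minimal illustration, not a real model: the layer sizes are arbitrary, and the weights are random placeholders rather than anything produced by training.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, biases):
    # Each artificial neuron sums its weighted inputs, adds a bias,
    # and applies a simple nonlinearity before passing the result on.
    return np.maximum(0.0, weights @ x + biases)  # ReLU activation

# A tiny network: 4 inputs -> 5 hidden -> 5 hidden -> 2 outputs.
# Trained networks would set these weights via training; here they
# are random values purely for illustration.
w1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
w2, b2 = rng.normal(size=(5, 5)), np.zeros(5)
w3, b3 = rng.normal(size=(2, 5)), np.zeros(2)

x = rng.normal(size=4)   # the input
h = layer(x, w1, b1)     # first layer of artificial neurons
h = layer(h, w2, b2)     # second layer
output = w3 @ h + b3     # the final layer is read out as the result

print(output.shape)  # (2,)
```

Note that every neuron here does exactly the same thing: multiply, add, clip. That uniformity is the point of contrast with biological neurons drawn below.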

While that system is modeled on the behavior of some structures within the brain, it’s a very limited approximation. For one, all artificial neurons are functionally equivalent—there’s no specialization. In contrast, real neurons are highly specialized; they use a variety of neurotransmitters and take input from a range of extra-neural inputs like hormones. Some specialize in sending inhibitory signals while others activate the neurons they interact with. Different physical structures allow them to make different numbers and types of connections.

In addition, rather than simply forwarding a single value to the next layer, real neurons communicate through an analog series of activity spikes, sending trains of pulses that vary in timing and intensity. This allows for a degree of non-deterministic noise in communications.
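This pulse-train style of signaling is often caricatured in textbooks with a leaky integrate-and-fire model. The sketch below is such a caricature; the threshold and leak constants are arbitrary illustration values, not measurements.

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Caricature of a spiking neuron: membrane potential accumulates
    input, leaks over time, and emits a spike (1) when it crosses the
    threshold, after which it resets. Constants are arbitrary."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = leak * potential + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# A stronger, steadier input produces a denser spike train -- the
# timing of pulses, not a single forwarded number, carries the signal.
weak = integrate_and_fire([0.3] * 10)
strong = integrate_and_fire([0.6] * 10)
print(weak, strong)
```

Compare this with the artificial neuron above, which forwards one deterministic value per pass: here the same input strength is encoded in when and how often the neuron fires.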

Finally, while organized layers are a feature of a few structures in brains, they’re far from the rule. “What we found is it’s—at least in the fly—much more interconnected,” Baker told Ars. “You can’t really identify this strictly hierarchical network.”

With near-complete connection maps of the fly brain becoming available, she told Ars that researchers are “finding lateral connections or feedback projections, or what we call recurrent loops, where we’ve got neurons that are making a little circle and connectivity patterns. I think those things are probably going to be a lot more widespread than we currently appreciate.”

While we’re only beginning to understand the functional consequences of all this complexity, it’s safe to say that it allows networks composed of actual neurons far more flexibility in how they process information—a flexibility that may underlie how these neurons get re-deployed in a way that these researchers identified as crucial for some form of generalized intelligence.

But the differences between neural networks and the real-world brains they were modeled on go well beyond the functional differences we’ve talked about so far. They extend to significant differences in how these functional units are organized.

The brain isn’t monolithic

The neural networks we’ve generated so far are largely specialized systems meant to handle a single task. Even the most complicated tasks, like the prediction of protein structures, have typically relied on the interaction of only two or three specialized systems. In contrast, the typical brain has a lot of functional units. Some of these operate by sequentially processing a single set of inputs in something resembling a pipeline. But many others can operate in parallel, in some cases without any input activity going on elsewhere in the brain.

To give a sense of what this looks like, let’s think about what’s going on as you read this article. Doing so requires systems that handle motor control, which keep your head and eyes focused on the screen. Part of this system operates via feedback from the neurons that are processing the read material, causing small eye movements that help your eyes move across individual sentences and between lines.

Separately, there’s part of your brain devoted to telling the visual system what not to pay attention to, like the icon showing an ever-growing number of unread emails. Those of us who can read a webpage without even noticing the ads on it presumably have a very well-developed system in place for ignoring things. Reading this article may also mean you’re engaging the systems that handle other senses, getting you to ignore things like the noise of your heating system coming on while remaining alert for things that might signify threats, like an unexplained sound in the next room.

The input generated by the visual system then needs to be processed, from individual character recognition up to the identification of words and sentences, processes that involve systems in areas of the brain involved in both visual processing and language. Again, this is an iterative process, where building meaning from a sentence may require many eye movements to scan back and forth across a sentence, improving reading comprehension—and requiring many of these systems to communicate among themselves.

As meaning gets extracted from a sentence, other parts of the brain integrate it with information obtained in earlier sentences, which tends to engage yet another area of the brain, one that handles a short-term memory system called working memory. Meanwhile, other systems will be searching long-term memory, finding related material that can help the brain place the new information within the context of what it already knows. Still other specialized brain areas are checking for things like whether there’s any emotional content to the material you’re reading.

All of these different areas are engaged without you being consciously aware of the need for them.

In contrast, something like ChatGPT, despite having a lot of artificial neurons, is monolithic: No specialized structures are allocated before training starts. That’s in sharp contrast to a brain. “The brain does not start out as a bag of neurons and then as a baby it needs to make sense of the world and then determine what connections to make,” Baker noted. “There are already a lot of constraints and specifics that are already set up.”

Even in cases where it’s not possible to see any physical distinction between cells specialized for different functions, Baker noted that we can often find differences in what genes are active.

In contrast, pre-planned modularity is relatively new to the AI world. In software development, “This concept of modularity is well established, so we have the whole methodology around it, how to manage it,” Schain said. “It’s really an aspect that is important for maybe achieving AI systems that can then operate similarly to the human brain.” There are a few cases where developers have enforced modularity on systems, but Goldstein said these systems need to be trained with all the modules in place to see any gain in performance.

None of this is saying that a modular system can’t arise within a neural network as a result of its training. But so far, we have very limited evidence that one does. And since we mostly deploy each system for a very limited number of tasks, there’s no reason to expect modularity to be valuable.

There is some reason to believe that this modularity is key to the brain’s incredible flexibility. The region that recognizes emotion-evoking content in written text can also recognize it in music and images, for example. But the evidence here is mixed. There are some clear instances where a single brain region handles related tasks, but that’s not consistently the case; Baker noted, “When you’re talking humans, there are parts of the brain that are dedicated to understanding speech, and there are different areas that are involved in producing speech.”

This sort of reuse would also provide an advantage in terms of learning, since behaviors developed in one context could potentially be deployed in others. But as we’ll see, the differences between brains and AI when it comes to learning are far more comprehensive than that.

The brain is constantly training

Current AIs generally have two states: training and deployment. Training is where the AI learns its behavior; deployment is where that behavior is put to use. This isn’t absolute, as the behavior can be tweaked in response to things learned during deployment, like finding out it recommends eating a rock daily. But for the most part, once the weights among the connections of a neural network are determined through training, they’re retained.

That may be starting to change a bit, Schain said. “There is now maybe a shift in similarity where AI systems are using more and more what they call the test time compute, where at inference time you do much more than before, kind of a parallel to how the human brain operates,” he told Ars. But it’s still the case that neural networks are essentially useless without an extended training period.

In contrast, a brain doesn’t have distinct learning and active states; it’s constantly in both modes. In many cases, the brain learns while doing. Baker described that in terms of learning to take jumpshots: “Once you have made your movement, the ball has left your hand, it’s going to land somewhere. So that visual signal—that comparison of where it landed versus where you wanted it to go—is what we call an error signal. That’s detected by the cerebellum, and its goal is to minimize that error signal. So the next time you do it, the brain is trying to compensate for what you did last time.”
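The feedback loop Baker describes is, in spirit, the same error-correction rule found in classic learning algorithms: act, measure the error, compensate. A toy sketch, with every value invented purely for illustration:

```python
def practice_shots(aim, target, learning_rate=0.5, shots=8):
    """Toy error-signal learning: after each 'shot', compare where the
    ball landed to where it was supposed to go, and nudge the aim to
    shrink that error on the next attempt. All values are illustrative."""
    history = []
    for _ in range(shots):
        landed = aim                  # where this shot ends up
        error = landed - target      # the error signal
        aim -= learning_rate * error  # compensate next time
        history.append(abs(error))
    return history

errors = practice_shots(aim=3.0, target=1.0)
print(errors)  # each attempt's error is smaller than the last
```

The key property, mirrored in the quote above, is that learning happens during the activity itself: every attempt is both performance and training data.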

It makes for very different learning curves. An AI is typically not very useful until it has had a substantial amount of training. In contrast, a human can often pick up basic competence in a very short amount of time (and without massive energy use). “Even if you’re put into a situation where you’ve never been before, you can still figure it out,” Baker said. “If you see a new object, you don’t have to be trained on that a thousand times to know how to use it. A lot of the time, [if] you see it one time, you can make predictions.”

As a result, while an AI system with sufficient training may ultimately outperform the human, the human will typically reach a high level of performance faster. And unlike an AI, a human’s performance doesn’t remain static. Incremental improvements and innovative approaches are both still possible. This also allows humans to adjust to changed circumstances more readily. An AI trained on the body of written material up until 2020 might struggle to comprehend teen-speak in 2030; humans could at least potentially adjust to the shifts in language. (Though maybe an AI trained to respond to confusing phrasing with “get off my lawn” would be indistinguishable.)

Finally, since the brain is a flexible learning device, the lessons learned from one skill can be applied to related skills. So the ability to recognize tones and read sheet music can help with the mastery of multiple musical instruments. Chemistry and cooking share overlapping skillsets. And when it comes to schooling, learning how to learn can be used to master a wide range of topics.

In contrast, it’s essentially impossible to use an AI model trained on one topic for much else. The biggest exceptions are large language models, which seem to be able to solve problems on a wide variety of topics if they’re presented as text. But here, there’s still a dependence on sufficient examples of similar problems appearing in the body of text the system was trained on. To give an example, something like ChatGPT can seem to be able to solve math problems, but it’s best at solving things that were discussed in its training materials; giving it something new will generally cause it to stumble.

Déjà vu

For Schain, however, the biggest difference between AI and biology is in terms of memory. For many AIs, “memory” is indistinguishable from the computational resources that allow them to perform a task, resources that were set during training. For the large language models, it includes both the weights of connections learned then and a narrow “context window” that encompasses any recent exchanges with a single user. In contrast, biological systems have a lifetime of memories to rely on.

“For AI, it’s very basic: It’s like the memory is in the weights [of connections] or in the context. But with a human brain, it’s a much more sophisticated mechanism, still to be uncovered. It’s more distributed. There is the short term and long term, and it has to do a lot with different timescales. Memory for the last second, a minute and a day or a year or years, and they all may be relevant.”
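The split Schain describes (frozen weights plus a short rolling context) can be sketched as a bounded buffer. The class and the tiny window size below are invented for illustration; real models measure their context windows in tokens, not messages.

```python
from collections import deque

class ToyChatMemory:
    """All this toy 'model' remembers at inference time is a bounded
    window of recent exchanges; anything older silently falls out.
    The weights (not modeled here) stay frozen no matter what is said."""

    def __init__(self, window_size=3):
        self.context = deque(maxlen=window_size)  # oldest entries drop off

    def tell(self, message):
        self.context.append(message)

    def recalls(self, message):
        return message in self.context

memory = ToyChatMemory(window_size=3)
for msg in ["my name is Ada", "I like chess", "it is raining", "what a day"]:
    memory.tell(msg)

print(memory.recalls("my name is Ada"))  # False: pushed out of the window
print(memory.recalls("what a day"))      # True: still inside the window
```

A brain, on Schain’s account, has nothing this clean: its memories are distributed across mechanisms and timescales from seconds to decades.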

This lifetime of memories can be key to making intelligence general. It helps us recognize the possibilities and limits of drawing analogies between different circumstances or applying things learned in one context versus another. It provides us with insights that let us solve problems that we’ve never confronted before. And, of course, it also ensures that the horrible bit of pop music you were exposed to in your teens remains an earworm well into your 80s.

The differences between how brains and AIs handle memory, however, are very hard to describe. AIs don’t really have distinct memory, while the brain’s use of memory in any task more sophisticated than navigating a maze is generally so poorly understood that it’s difficult to discuss at all. All we can really say is that there are clear differences there.

Facing limits

It’s difficult to think about AI without recognizing the enormous energy and computational resources involved in training one. And in this case, it’s potentially relevant. Brains have evolved under enormous energy constraints and continue to operate using well under the energy that a daily diet can provide. That has forced biology to figure out ways to optimize its resources and get the most out of the resources it does commit to.

In contrast, the story of recent developments in AI is largely one of throwing more resources at them. And plans for the future seem to (so far at least) involve more of this, including larger training data sets and ever more artificial neurons and connections among them. All of this comes at a time when the best current AIs are already using three orders of magnitude more neurons than we’d find in a fly’s brain and have nowhere near the fly’s general capabilities.

It remains possible that there is more than one route to those general capabilities and that some offshoot of today’s AI systems will eventually find a different route. But if it turns out that we have to bring our computerized systems closer to biology to get there, we’ll run into a serious roadblock: We don’t fully understand the biology yet.

“I guess I am not optimistic that any kind of artificial neural network will ever be able to achieve the same plasticity, the same generalizability, the same flexibility that a human brain has,” Baker said. “That’s just because we don’t even know how it gets it; we don’t know how that arises. So how do you build that into a system?”


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


the-iss-is-nearly-as-microbe-free-as-an-isolation-ward

The ISS is nearly as microbe-free as an isolation ward

“One of the more similar environments to the ISS was in the isolation dorms on the UCSD campus during the COVID-19 pandemic. All surfaces were continuously sterilized, so that microbial signatures would be erased by the time another person would show up,” Benitez said. So one of the first solutions he and his colleagues suggested for the ISS’s microbial diversity problem was to ease up on sterilizing the station so much.

“The extensive use of disinfection chemicals might not be the best approach to maintaining a healthy microbial environment, although there is certainly plenty of research to be conducted,” Benitez said.

Space-faring gardens

He suggested that introducing microbes that are beneficial to human health might be better than constantly struggling to wipe out all microbial life on the station. And while some modules up there do need to be sterilized, keeping some beneficial microbes alive could be achieved by designing future spacecraft in a way that accounts for how the microbes spread.

“We found that microbes in modules with little human activity tend to stay in those modules without spreading. When human activity is high in a module, then the microbes spread to adjacent modules,”  Zhao said. She said spacecraft could be designed to put modules with high human activity at one end and the modules with little to no human activity at the opposite end, so the busy modules don’t contaminate the ones that need to remain sterile. “We are of course talking as microbiologists and chemists—perhaps spacecraft engineers have more pressing reasons to put certain modules at certain spots,” Zhao said. “These are just preliminary ideas.”

But what about crewed deep space missions to Mars and other destinations in the Solar System? Should we carefully design the microbial composition beforehand, plant the microbes on the spacecraft and hope this artificial, closed ecosystem will work for years without any interventions from Earth?

“I’d take a more holistic ecosystem approach,” Benitez said. He imagines that in the future we could build spacecraft and space stations hosting entire gardens, with microbes interacting with plants, pollinators, and animals to create balanced, self-sustaining ecosystems. “We’d not only need to think about sending the astronauts and the machines they need to function, but also about all other lifeforms we will need to send along with them,” Benitez said.

Cell, 2025. DOI: 10.1016/j.cell.2025.01.039


flashy-exotic-birds-can-actually-glow-in-the-dark

Flashy exotic birds can actually glow in the dark

Found in the forests of Papua New Guinea, Indonesia, and Eastern Australia, birds of paradise are famous for flashy feathers and unusually shaped ornaments, which set the standard for haute couture among birds. Many use these feathers for flamboyant mating displays in which they shape-shift into otherworldly forms.

As if this didn’t attract enough attention, we’ve now learned that they also glow in the dark.

Biofluorescent organisms are everywhere, from mushrooms to fish to reptiles and amphibians, but few birds have been identified as having glowing feathers. This is why biologist Rene Martin of the University of Nebraska-Lincoln wanted to investigate. She and her team studied a treasure trove of specimens at the American Museum of Natural History, which have been collected since the 1800s, and found that 37 of the 45 known species of birds of paradise have feathers that fluoresce.

The glow apparently plays a role in mating displays. But although biofluorescence is especially prominent in males, attracting a mate may not be its only use: the birds might also employ it for other kinds of signaling, and sometimes even for camouflage among the light and shadows.

“The current very limited number of studies reporting fluorescence in birds suggests this phenomenon has not been thoroughly investigated,” the researchers said in a study that was recently published in Royal Society Open Science.

Glow-up

How do they get that glow? Biofluorescence is a phenomenon that happens when shorter, high-energy wavelengths of light, meaning UV, violet, and blue, are absorbed by an organism. The energy then gets re-emitted at longer, lower-energy wavelengths—greens, yellows, oranges, and reds. The feathers of birds of paradise contain fluorophores, molecules that undergo biofluorescence. Specialized filters in the light-sensitive cells of their eyes make their visual system more sensitive to biofluorescence.


study:-cuttlefish-adapt-camouflage-displays-when-hunting-prey

Study: Cuttlefish adapt camouflage displays when hunting prey

Crafty cuttlefish employ several different camouflaging displays while hunting their prey, according to a new paper published in the journal Ecology, including mimicking benign ocean objects like a leaf or coral, or flashing dark stripes down their bodies. And individual cuttlefish seem to choose different preferred hunting displays for different environments.

It’s well-known that cuttlefish and several other cephalopods can rapidly shift the colors in their skin thanks to that skin’s unique structure. As previously reported, squid skin is translucent and features an outer layer of pigment cells called chromatophores that control light absorption. Each chromatophore is attached to muscle fibers that line the skin’s surface, and those fibers, in turn, are connected to a nerve fiber. It’s a simple matter to stimulate those nerves with electrical pulses, causing the muscles to contract. And because the muscles are pulling in different directions, the cell expands, along with the pigmented areas, changing the color. When the cell shrinks, so do the pigmented areas.

Underneath the chromatophores, there is a separate layer of iridophores. Unlike the chromatophores, the iridophores aren’t pigment-based but are an example of structural color, similar to the crystals in the wings of a butterfly, except a squid’s iridophores are dynamic rather than static. They can be tuned to reflect different wavelengths of light. A 2012 paper suggested that this dynamically tunable structural color of the iridophores is linked to a neurotransmitter called acetylcholine. The two layers work together to generate the unique optical properties of squid skin.

And then there are leucophores, which are similar to the iridophores, except they scatter the full spectrum of light, so they appear white. They contain reflectin proteins that typically clump together into nanoparticles so that light scatters instead of being absorbed or directly transmitted. Leucophores are mostly found in cuttlefish and octopuses, but some female squid of the genus Sepioteuthis have leucophores they can “tune” to scatter only certain wavelengths of light. When the cells let light through with little scattering, they appear more transparent; when they scatter more light, they become opaque and more conspicuous.

Scientists learned in 2023 that the process by which cuttlefish generate their camouflage patterns is significantly more complex than scientists previously thought. Specifically, cuttlefish readily adapted their skin patterns to match different backgrounds, whether natural or artificial. And the creatures didn’t follow the same transitional pathway every time, often pausing in between. That means that contrary to prior assumptions, feedback seems to be critical to the process, and the cuttlefish were correcting their patterns to match the backgrounds better.


ai-used-to-design-a-multi-step-enzyme-that-can-digest-some-plastics

AI used to design a multi-step enzyme that can digest some plastics

And it worked. Repeating the same process with an added PLACER screening step boosted the number of enzymes with catalytic activity by over three-fold.

Unfortunately, all of these enzymes stalled after a single reaction. It turns out they were much better at cleaving the ester, but they left one part of it chemically bonded to the enzyme. In other words, the enzymes acted like part of the reaction, not a catalyst. So the researchers started using PLACER to screen for structures that could adopt a key intermediate state of the reaction. This produced a much higher rate of reactive enzymes (18 percent of them cleaved the ester bond), and two—named “super” and “win”—could actually cycle through multiple rounds of reactions. The team had finally made an enzyme.

By adding additional rounds alternating between structure suggestions using RFDiffusion and screening using PLACER, the team saw the frequency of functional enzymes increase and eventually designed one that had an activity similar to some produced by actual living things. They also showed they could use the same process to design an esterase capable of digesting the bonds in PET, a common plastic.
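The alternating generate-and-screen loop described above can be sketched in code. This is a toy illustration only: `propose_structures` and `placer_screen` are hypothetical stand-ins for RFDiffusion and PLACER, which are real tools with far more complex inputs and outputs than a single score.

```python
import random

random.seed(0)

def propose_structures(n):
    """Hypothetical stand-in for RFDiffusion: propose n candidate designs,
    each represented here by a single quality score in [0, 1)."""
    return [random.random() for _ in range(n)]

def placer_screen(designs, threshold=0.8):
    """Hypothetical stand-in for PLACER: keep only the designs predicted to
    adopt the reaction's key intermediate state."""
    return [d for d in designs if d > threshold]

def design_loop(rounds=3, per_round=100):
    """Alternate structure generation with screening, accumulating survivors."""
    survivors = []
    for _ in range(rounds):
        survivors.extend(placer_screen(propose_structures(per_round)))
    return survivors

hits = design_loop()
print(f"{len(hits)} of 300 candidates passed screening")
```

In the actual work, each surviving design would still have to be synthesized and tested for catalytic activity in the lab; the computational screening only raises the hit rate.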

If that sounds like a lot of work, it clearly was—designing enzymes, even ones with close counterparts in living things, will remain a serious challenge. But at least much of it can be done on computers, rather than requiring someone to order up the DNA that encodes each candidate enzyme, get bacteria to make it, and screen it for activity. And despite the process involving references to known enzymes, the designed ones didn’t share much sequence in common with them. That suggests there should be added flexibility if we want to design an enzyme that will react with esters that living things have never come across.

I’m curious about what might happen if we design an enzyme that is essential for survival, put it in bacteria, and then allow it to evolve for a while. I suspect life could find ways of improving on even our best designs.

Science, 2024. DOI: 10.1126/science.adu2454  (About DOIs).


parrots-struggle-when-told-to-do-something-other-than-mimic-their-peers

Parrots struggle when told to do something other than mimic their peers

There have been many studies on the capability of non-human animals to mimic transitive actions—actions that have a purpose. Hardly any studies have shown that animals can also imitate intransitive actions. Even though these movements serve no particular purpose, mimicking them is still thought to aid socialization and strengthen bonds, in both animals and humans.

Zoologist Esha Haldar and colleagues from the Comparative Cognition Research group worked with blue-throated macaws, which are critically endangered, at the Loro Parque Fundación in Tenerife. They trained the macaws to perform two intransitive actions, then set up a conflict: Two neighboring macaws were asked to do different actions.

What Haldar and her team found was that individual birds were more likely to perform the same intransitive action as a bird next to them, no matter what they’d been asked to do. This could mean that macaws possess mirror neurons, the same neurons that, in humans, fire when we are watching intransitive movements and cause us to imitate them (at least if these neurons function the way some think they do).

But it wasn’t on purpose

Parrots are already known for their mimicry of transitive actions, such as grabbing an object. Because they are highly social creatures with brains that are large relative to the size of their bodies, they made excellent subjects for a study that gauged how susceptible they were to copying intransitive actions.

Mirroring of intransitive actions, also called automatic imitation, can be measured with what’s called a stimulus-response-compatibility (SRC) test. These tests measure the response time between seeing an intransitive movement (the visual stimulus) and mimicking it (the action); a faster response indicates a stronger reaction to the stimulus. The tests also measure how accurately the subject reproduces the movement.
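To illustrate how an SRC effect is typically quantified, here is a minimal sketch with made-up reaction times; the numbers and trial labels are hypothetical, not taken from the study.

```python
# Toy illustration of quantifying a stimulus-response-compatibility effect:
# hypothetical reaction times (ms) for trials where the required action
# matches the observed movement ("compatible") versus conflicts with it
# ("incompatible"). All numbers are invented for illustration.
compatible_rts = [412, 398, 430, 405, 421]    # cued action matches what was seen
incompatible_rts = [471, 455, 489, 463, 478]  # cued action conflicts with what was seen

def mean(xs):
    return sum(xs) / len(xs)

# Automatic imitation shows up as slower responses when the observed
# movement conflicts with the instructed one, so the effect is the
# difference between the two mean reaction times.
src_effect = mean(incompatible_rts) - mean(compatible_rts)
print(f"SRC effect: {src_effect:.1f} ms")  # prints "SRC effect: 58.0 ms"
```

A reliably positive difference is the signature of automatic imitation: the observed movement interferes with producing a different one.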

Until now, there have only been three studies that showed non-human animals are capable of copying intransitive actions, but the intransitive actions in these studies were all by-products of transitive actions. Only one of these focused on a parrot species. Haldar and her team would be the first to test directly for animal mimicry of intransitive actions.


bonobos-recognize-when-humans-are-ignorant,-try-to-help

Bonobos recognize when humans are ignorant, try to help

A lot of human society requires what’s called a “theory of mind”—the ability to infer the mental state of another person and adjust our actions based on what we expect they know and are thinking. We don’t always get this right—it’s easy to get confused about what someone else might be thinking—but we still rely on it to navigate everything from complicated social situations to avoiding people on a busy street.

There’s some mixed evidence that other animals have a limited theory of mind, but there are alternate interpretations for most of it. So two researchers at Johns Hopkins, Luke Townrow and Christopher Krupenye, came up with a way of testing whether some of our closest living relatives, the bonobos, could infer the state of mind of a human they were cooperating with. The work clearly showed that the bonobos could tell when their human partner was ignorant.

Now you see it…

The experimental approach is quite simple and involves a setup familiar to street hustlers: a set of three cups, with a treat placed under one of them. Except in this case there’s no sleight of hand—the bonobo can watch as one experimenter places the treat under a cup, and all of the cups remain stationary throughout the experiment.

To get the treat, however, requires the cooperation of a second human experimenter. That person has to identify the right cup, then give the treat under it to the bonobo. In some experiments, this human can watch the treat being hidden through a transparent partition, and so knows exactly where it is. In others, however, the partition is solid, leaving the human with no idea of which cup might be hiding the food.

This setup means that the bonobo will always know where the food is and will also know whether the human could potentially have the same knowledge.

The bonobos were first familiarized with the setup and got to experience their human partner taking the treat out from under the cup and giving it to them. Once they were familiar with the process, they watched the food being hidden with no partner present; in this situation they rarely pointed, demonstrating that they seldom take food-directed actions without a good reason. In contrast, when their human partner was present, they were about eight times more likely to point to the cup with the food under it.
