robotics


Why iRobot’s founder won’t go within 10 feet of today’s walking robots

In his post, Brooks recounts being “way too close” to an Agility Robotics Digit humanoid when it fell several years ago. He has not dared approach a walking one since. Even in promotional videos from humanoid companies, Brooks notes, humans are never shown close to moving humanoid robots unless separated by furniture, and even then, the robots only shuffle minimally.

This safety problem extends beyond accidental falls. For humanoids to fulfill their promised role in health care and factory settings, they need certification to operate in zones shared with humans. Current walking mechanisms make such certification virtually impossible under existing safety standards in most parts of the world.

The humanoid Apollo robot. Credit: Google

Brooks predicts that within 15 years, there will indeed be many robots called “humanoids” performing various tasks. But ironically, they will look nothing like today’s bipedal machines. They will have wheels instead of feet, varying numbers of arms, and specialized sensors that bear no resemblance to human eyes. Some will have cameras in their hands or looking down from their midsections. The definition of “humanoid” will shift, just as “flying cars” now means electric helicopters rather than road-capable aircraft, and “self-driving cars” means vehicles with remote human monitors rather than truly autonomous systems.

The billions currently being invested in forcing today’s rigid, vision-only humanoids to learn dexterity will largely disappear, Brooks argues. Academic researchers are making more progress with systems that incorporate touch feedback, like MIT’s approach using a glove that transmits sensations between human operators and robot hands. But even these advances remain far from the comprehensive touch sensing that enables human dexterity.

Today, few people spend their days near humanoid robots, but Brooks’ 3-meter rule stands as a practical warning of challenges ahead from someone who has spent decades building these machines. The gap between promotional videos and deployable reality remains large, measured not just in years but in fundamental unsolved problems of physics, sensing, and safety.



Google DeepMind unveils its first “thinking” robotics AI

Imagine that you want a robot to sort a pile of laundry into whites and colors. Gemini Robotics-ER 1.5 would process the request along with images of the physical environment (a pile of clothing). This AI can also call tools like Google search to gather more data. The ER model then generates natural language instructions, specific steps that the robot should follow to complete the given task.

The two new models work together to “think” about how to complete a task. Credit: Google

Gemini Robotics 1.5 (the action model) takes these instructions from the ER model and generates robot actions while using visual input to guide its movements. But it also goes through its own thinking process to consider how to approach each step. “There are all these kinds of intuitive thoughts that help [a person] guide this task, but robots don’t have this intuition,” said DeepMind’s Kanishka Rao. “One of the major advancements that we’ve made with 1.5 in the VLA is its ability to think before it acts.”
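To make the division of labor concrete, here is a minimal, hypothetical sketch of that orchestrator/action loop in Python. None of the class or method names below come from Google’s SDK; they simply illustrate how an ER-style planner might hand natural-language steps to a VLA-style action model that executes them with visual feedback.

```python
# Hypothetical sketch of the two-model pipeline described above; the names
# OrchestratorModel, ActionModel, and the robot interface are illustrative.

class OrchestratorModel:  # stands in for Gemini Robotics-ER 1.5
    def plan(self, request: str, scene_image) -> list[str]:
        # Reason over the request and the scene (optionally calling tools such
        # as web search) and return natural-language steps.
        return [
            "locate the laundry pile",
            "pick up one item",
            "decide whether it is white or colored",
            "place it in the matching basket",
        ]

class ActionModel:  # stands in for Gemini Robotics 1.5, the VLA
    def execute(self, step: str, scene_image) -> None:
        # "Think" about how to carry out the step, then emit motor commands.
        ...

def sort_laundry(robot, orchestrator: OrchestratorModel, actor: ActionModel):
    steps = orchestrator.plan("Sort this laundry into whites and colors",
                              robot.camera())
    for step in steps:
        actor.execute(step, robot.camera())  # fresh visual input guides each step
```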

Both of DeepMind’s new robotic AIs are built on the Gemini foundation models but have been fine-tuned with data that adapts them to operating in a physical space. This approach, the team says, gives robots the ability to undertake more complex multi-stage tasks, bringing agentic capabilities to robotics.

The DeepMind team tests Gemini Robotics with a few different machines, like the two-armed Aloha 2 and the humanoid Apollo. In the past, AI researchers had to create customized models for each robot, but that’s no longer necessary. DeepMind says that Gemini Robotics 1.5 can learn across different embodiments, transferring skills learned from Aloha 2’s grippers to the more intricate hands on Apollo with no specialized tuning.

All this talk of physical agents powered by AI is fun, but we’re still a long way from a robot you can order to do your laundry. Gemini Robotics 1.5, the model that actually controls robots, is still only available to trusted testers. However, the thinking ER model is now rolling out in Google AI Studio, allowing developers to generate robotic instructions for their own physically embodied robotic experiments.
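For developers who want to try the ER model, a call through the Gen AI Python SDK might look roughly like the sketch below. Treat it as an assumption-laden example: the model ID string and the prompt are placeholders, and the exact model name available in Google AI Studio may differ.

```python
# Rough sketch of querying the ER model via the google-genai SDK
# (pip install google-genai). The model ID is an assumption; check
# Google AI Studio for the exact identifier.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

with open("workbench.jpg", "rb") as f:
    scene = types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")

response = client.models.generate_content(
    model="gemini-robotics-er-1.5-preview",  # assumed model ID
    contents=[
        scene,
        "List the steps a one-armed robot should take to sort this laundry "
        "into whites and colors.",
    ],
)
print(response.text)
```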



DeepMind’s robotic ballet: An AI for coordinating manufacturing robots


An AI figures out how robots can get jobs done without getting in each other’s way.

A lot of the stuff we use today is largely made by robots—arms with multiple degrees of freedom positioned along conveyor belts that move in a spectacle of precisely synchronized motions. All this motion is usually programmed by hand, which can take hundreds to thousands of hours. Google’s DeepMind team has developed an AI system called RoboBallet that lets manufacturing robots figure out what to do on their own.

Traveling salesmen

Planning what manufacturing robots should do to get their jobs done efficiently is really hard to automate. You need to solve both task allocation and scheduling—deciding which task should be done by which robot in what order. It’s like the famous traveling salesman problem on steroids. On top of that, there is the question of motion planning; you need to make sure all these robotic arms won’t collide with each other or with all the gear standing around them.

In the end, you’re facing myriad possible combinations, and you’ve got to solve not one but three computationally hard problems at the same time. “There are some tools that let you automate motion planning, but task allocation and scheduling are usually done manually,” says Matthew Lai, a research engineer at Google DeepMind. “Solving all three of these problems combined is what we tackled in our work.”

Lai’s team started by generating simulated samples of what are called work cells, areas where teams of robots perform their tasks on a product being manufactured. The work cells contained something called a workpiece, a product on which the robots do work, in this case something to be constructed of aluminum struts placed on a table. Around the table, there were up to eight randomly placed Franka Panda robotic arms, each with 7 degrees of freedom, that were supposed to complete up to 40 tasks on a workpiece. Every task required a robotic arm’s end effector to get within 2.5 centimeters of the right spot on the right strut, approached from the correct angle, then stay there, frozen, for a moment. The pause simulates doing some work.

To make things harder, the team peppered every work cell with random obstacles the robots had to avoid. “We chose to work with up to eight robots, as this is around the sensible maximum for packing robots closely together without them blocking each other all the time,” Lai explains. Forcing the robots to perform 40 tasks on a workpiece was also something the team considered representative of what’s required at real factories.

A setup like this would be a nightmare to tackle using even the most powerful reinforcement-learning algorithms. Lai and his colleagues found a way around it by turning it all into graphs.

Complex relationships

Graphs in Lai’s model comprised nodes and edges. Things like robots, tasks, and obstacles were treated as nodes. Relationships between them were encoded as either one- or bi-directional edges. One-directional edges connected robots with tasks and obstacles because the robots needed information about where the obstacles were and whether the tasks were completed or not. Bidirectional edges connected the robots to each other, because each robot had to know what other robots were doing at each time step to avoid collisions or duplicating tasks.
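A toy illustration of that encoding, using networkx rather than DeepMind’s actual code, is below; the edge directions reflect one plausible reading of the description (task and obstacle information flows to the robots, while robots are linked to each other in both directions).

```python
# Toy work-cell graph: robots, tasks, and obstacles as nodes; directed edges
# carry task/obstacle state to robots, and robot-robot edges run both ways.
import networkx as nx

G = nx.DiGraph()
robots = [f"robot_{i}" for i in range(4)]
tasks = [f"task_{j}" for j in range(10)]
obstacles = [f"obstacle_{k}" for k in range(3)]

for r in robots:
    G.add_node(r, kind="robot")
for t in tasks:
    G.add_node(t, kind="task", done=False)
for o in obstacles:
    G.add_node(o, kind="obstacle")

for r in robots:
    for t in tasks:
        G.add_edge(t, r)       # robots need each task's status
    for o in obstacles:
        G.add_edge(o, r)       # ...and each obstacle's location

for a in robots:
    for b in robots:
        if a != b:
            G.add_edge(a, b)   # bidirectional robot-robot coordination
```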

To read and make sense of the graphs, the team used graph neural networks, a type of artificial intelligence designed to extract relationships between the nodes by passing messages along the edges of the connections among them. This decluttered the data, allowing the researchers to design a system that focused exclusively on what mattered most: finding the most efficient ways to complete tasks while navigating obstacles. After a few days of training on randomly generated work cells using a single Nvidia A100 GPU, the new industrial planning AI, called RoboBallet, could lay out seemingly viable trajectories through complex, previously unseen environments in a matter of seconds.
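Message passing itself can be reduced to a few lines. The sketch below is an illustration rather than RoboBallet’s architecture: it reuses the graph G from the previous snippet and updates each node’s feature vector from the mean of its incoming neighbors, where a trained GNN would instead apply learned transformations.

```python
# One bare-bones message-passing step over the graph G built above.
import numpy as np

def message_passing_step(G, features, dim=8):
    updated = {}
    for node in G.nodes:
        senders = list(G.predecessors(node))
        if senders:
            msg = np.mean([features[s] for s in senders], axis=0)
        else:
            msg = np.zeros(dim)
        # A real GNN would apply learned weights and a nonlinearity here.
        updated[node] = 0.5 * features[node] + 0.5 * msg
    return updated

features = {n: np.random.randn(8) for n in G.nodes}
features = message_passing_step(G, features)
```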

Most importantly, though, it scaled really well.

Economy of scale

The problem with applying traditional computational methods to complex problems like managing robots at a factory is that the computational cost grows exponentially with the number of items in your system. Computing optimal trajectories for one robot is relatively simple. Doing the same for two is considerably harder; when the number grows to eight, the problem becomes practically intractable.

With RoboBallet, the complexity of computation also grew with the complexity of the system, but at a far slower rate. (The computations grew linearly with the growing number of tasks and obstacles, and quadratically with the number of robots.) According to the team, these computations should make the system feasible for industrial-scale use.
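A back-of-the-envelope comparison makes the difference vivid. The cost functions below are illustrative stand-ins, not the paper’s actual complexity analysis: brute-force task allocation alone scales as the number of robots raised to the number of tasks, while the reported RoboBallet scaling is linear in tasks and obstacles and quadratic in robots.

```python
# Illustrative growth comparison (assumed cost models, not from the paper).
def brute_force_allocations(n_robots, n_tasks):
    return n_robots ** n_tasks           # every way to assign tasks to robots

def roboballet_scaling(n_robots, n_tasks, n_obstacles):
    return n_tasks + n_obstacles + n_robots ** 2

for n in (1, 2, 4, 8):
    print(n, brute_force_allocations(n, 40), roboballet_scaling(n, 40, 10))
```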

The team wanted to test, however, whether the plans their AI was producing were any good. To check that, Lai and his colleagues computed optimal task allocations, schedules, and motions in a few simplified work cells and compared those with results delivered by RoboBallet. In terms of execution time, arguably the most important metric in manufacturing, the AI came very close to what human engineers could do. It wasn’t better than they were—it just provided an answer more quickly.

The team also tested RoboBallet plans on a real-world physical setup of four Panda robots working on an aluminum workpiece, and they worked just as well as in simulations. But Lai says it can do more than just speed up the process of programming robots.

Limping along

RoboBallet, according to DeepMind’s team, also enables us to design better work cells. “Because it works so fast, it would be possible for a designer to try different layouts and different placement or selections of robots in almost real time,” Lai says. This way, engineers at factories would be able to see exactly how much time they would save by adding another robot to a cell or choosing a robot of a different type. Another thing RoboBallet can do is reprogram the work cell on the fly, allowing other robots to fill in when one of them breaks down.

Still, there are a few things that still need ironing out before RoboBallet can come to factories. “There are several simplifications we made,” Lai admits. The first was that the obstacles were decomposed into cuboids. Even the workpiece itself was cubical. While this was somewhat representative of the obstacles and equipment in real factories, there are lots of possible workpieces with more organic shapes. “It would be better to represent those in a more flexible way, like mesh graphs or point clouds,” Lai says. This, however, would likely mean a drop in RoboBallet’s blistering speed.

Another thing is that the robots in Lai’s experiments were identical, while in a real-world work cell, robotic teams are quite often heterogeneous. “That’s why real-world applications would require additional research and engineering specific to the type of application,” Lai says. He adds, though, that the current RoboBallet is already designed with such adaptations in mind—it can be easily extended to support them. And once that’s done, his hope is that it will make factories faster and way more flexible.

“The system would have to be given work cell models, the workpiece models, as well as the list of tasks that need to be done—based on that, RoboBallet would be able to generate a complete plan,” Lai says.

Science Robotics, 2025. DOI: 10.1126/scirobotics.ads1204


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



A robot walks on water thanks to evolution’s solution

Robots can serve pizza, crawl over alien planets, swim like octopuses and jellyfish, cosplay as humans, and even perform surgery. But can they walk on water?

Rhagobot isn’t exactly the first thing that comes to mind at the mention of a robot. Inspired by Rhagovelia water striders, semiaquatic insects also known as ripple bugs, these tiny bots can glide across rushing streams thanks to a robotic version of an evolutionary adaptation.

Rhagovelia (as opposed to other species of water striders) have fan-like appendages toward the ends of their middle legs that passively open and close depending on how the water beneath them is moving. This is why they appear to glide effortlessly across the water’s surface. Biologist Victor Ortega-Jimenez of the University of California, Berkeley, was intrigued by how such tiny insects can accelerate and pull off rapid turns and other maneuvers, almost as if they are flying across a liquid surface.

“Rhagovelia’s fan serves as an inspiring template for developing self-morphing artificial propellers, providing insights into their biological form and function,” he said in a study recently published in Science. “Such configurations are largely unexplored in semi-aquatic robots.”

Mighty morphin’

It took Ortega-Jimenez five years to figure out how the bugs get around. While Rhagovelia leg fans were thought to morph because they were powered by muscle, he found that the appendages automatically adjusted to the surface tension and elastic forces beneath them, passively opening and closing ten times faster than it takes to blink. They expand immediately when making contact with water and change shape depending on the flow.

By covering an extensive surface area for their size and maintaining their shape when the insects move their legs, Rhagovelia fans generate a tremendous amount of propulsion. They also do double duty. Despite being rigid enough to resist deformation when extended, the fans are still flexible enough to easily collapse, adhering to the claw above to keep from getting in the animal’s way when it’s out of water. It also helps that the insects have hydrophobic legs that repel water that could otherwise weigh them down.

Ortega-Jimenez and his research team observed the leg fans using a scanning electron microscope. If they were going to create a robot based on ripple bugs, they needed to know the exact structure they were going for. After experimenting with cylindrical fans, the researchers found that Rhagovelia fans are actually structures made of many flat barbs with barbules, something that was previously unknown.



Robots eating other robots: The benefits of machine metabolism


If you define “metabolism” loosely enough, these robots may have one.

For decades we’ve been trying to make robots smarter and more physically capable by mimicking biological intelligence and movement. “But in doing so, we’ve been just replicating the results of biological evolution—I say we need to replicate its methods,” argues Philippe Wyder, a developmental robotics researcher at Columbia University. Wyder led a team that demonstrated a machine with a rudimentary form of what they’re calling a metabolism.

He and his colleagues built a robot that could consume other robots to grow physically, become stronger and more capable, and keep functioning.

Nature’s methods

The idea of robotic metabolism combines various concepts in AI and robotics. The first is artificial life, which Wyder describes as “a field where people study the evolution of organisms through computer simulations.” Then there is the idea of modular robots: reconfigurable machines that can change their architecture by rearranging collections of basic modules. That approach was pioneered in the US in the 1990s by researchers including Daniela Rus and Mark Yim.

Finally, there is the idea that we need a shift from a goal-oriented design we’ve been traditionally implementing in our machines to a survivability-oriented design found in living organisms, which Magnus Egerstedt proposed in his book Robot Ecology.

Wyder’s team took all these ideas, merged them, and prototyped a robot that could “eat” other robots. “I kind of came at this from many different angles,” Wyder says.

The key source of inspiration, though, was the way nature builds its organisms. There are 20 standard amino acids universally used by life that can be combined into trillions of proteins, forming the building blocks of countless life forms. Wyder started his project by designing a basic robotic module that was intended to play a role roughly equivalent to a single amino acid. These modules, called Truss Links, looked like rods: each was 16 centimeters long and contained batteries, electronic controllers, and servomotors that enabled it to expand, contract, and crawl in a straight line. Each had permanent magnets at its ends, which let it connect to other rods and form lightweight lattices.

Wyder’s idea was to throw a number of these modules in a confined space to see if they would assemble into more complex structures by bumping into each other. The process might be analogous to how amino acids spontaneously formed simple organic molecules roughly 4 billion years ago.

Robotic growth

The first stage of Wyder’s experiment was set up in a space with a few terrain features, like a drop, a few obstacles, and a standing cylinder. The robots were operated by the team, which directed them to form various structures. Three Truss Links connected with the magnets at one center point formed a three-pointed star. Other structures they formed included a triangle; a diamond with a tail, which was a triangle connected to a three-pointed star; and a tetrahedron, a 3D structure that looked like a triangular pyramid. The robots had to find other Truss Links and make them part of their bodies to grow into more complex forms.

As they were growing, they were also becoming more capable. A single Truss Link could only move in a straight line, a triangle could turn left and right, a diamond with a tail could traverse small bumps, while a tetrahedron could move itself over small walls. Finally, a tetrahedron with a ratchet—an additional Truss Link the robot could use a bit like a walking stick—could assist other robots in forming tetrahedrons, which was a difficult, risky maneuver that took multiple attempts even for the skilled operators.

Still, all this growth in size and capability was orchestrated by the researchers controlling the hardware. The question was whether these self-assembly processes could work with no human overlords around.

“We wanted to know if the Truss Links would meet on their own,” Wyder says. “If the Truss Links are exactly parallel, they will never connect. But being parallel is just one configuration, and there are infinite configurations where they are not parallel.” To check how this would play out, the team used computer simulations of six randomly spawned and randomly moving Truss Links in a walled environment. In 2,000 runs, each 20 minutes long, the modules had a 64 percent chance of forming two three-pointed stars, roughly an 8.4 percent chance of assembling into two triangles, and nearly a 45 percent chance of ending up as a diamond with a tail. (Some of these configurations were intermediates on the pathway to others, so the numbers add up to more than 100 percent.)

When moving randomly, Truss Links could also repair structures after their magnets got disconnected and even replace a malfunctioning Truss Link in the structure with a new one. But did they really metabolize anything?

Searching for purpose

The name “metabolism” comes from the Greek word “metabolē” which means “change.” Wyder’s robots can assemble, grow, reconfigure, rebuild, and, to a limited extent, sustain themselves, which definitely qualifies as change.

But metabolism, as it’s commonly understood, involves consuming materials in ways that extract energy and transform their chemicals. The Truss Links are limited to using prefabricated, compatible modules—they can’t consume some plastic and old lithium-ion batteries and metabolize them into brand-new Truss Links. Whether this qualifies as metabolism depends more on how far we want to stretch the definition than on what the actual robots can do.

And stretching definitions, so far, may be their strongest use case. “I can’t give you a real-world use case,” Wyder acknowledges. “We tried to make the truss robots carry loads from one point to another, but it’s not even included in our paper—it’s a research platform at this point.” The first thing he thinks the robotic metabolism platform is missing is a wider variety of modules. The team used homogeneous modules in this work but is already thinking about branching out. “Life uses around 20 different amino acids to work, so we’re currently focusing on integrating additional modules with various sensors,” Wyder explains. But the robots are also lacking something way more fundamental: a purpose.

Life evolves to improve the chances of survival. It does so in response to pressures like predators or a challenging environment. A living thing is usually doing its best to avoid dying.

Egerstedt, in Robot Ecology, argues we should build and program robots the same way, with “survivability constraints” in mind. Wyder, in his paper, also claims we need to develop a “self-sustained robot ecology” in the future. But he also thinks we shouldn’t take this life analogy too far. His goal is not creating a robotic ecosystem where robots would hunt and feed on other robots, constantly improving their own designs.

“We would give robots a purpose. Let’s say a purpose is to build a lunar colony,” Wyder says. Survival should be the first objective, because if the platform doesn’t survive on the Moon, it won’t build a lunar colony. Multiple small units would first disperse to explore the area and then assemble into a bigger structure like a building or a crane. “And this large structure would absorb, recycle, or eat, if you will, all these smaller robots to integrate and make use of them,” Wyder claims.

A robotic platform like this, Wyder thinks, should adapt to unexpected circumstances even better than life itself. “There may be a moment where having a third arm would really save your life, but you can’t grow one. A robot, given enough time, won’t have that problem,” he says.

Science Advances, 2025. DOI: 10.1126/sciadv.adu6897


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



Robotic sucker can adapt to surroundings like an actual octopus

This isn’t the first time suction cups have been inspired by highly adaptive octopus suckers. Some models have used pressurized chambers meant to push against a surface and conform to it. Others have focused more on matching the morphology of a biological sucker. This has included giving the suckers microdenticles, the tiny tooth-like projections on octopus suckers that give them a stronger grip.

Previous methods of artificial conformation have had some success, but they could be prone to leakage from gaps between the sucker and the surface it is trying to stick to, and they often needed vacuum pumps to operate. Yue and his team created a sucker that was morphologically and mechanically similar to that of an octopus.

Suckers are muscular structures with extreme flexibility that helps them conform to objects without leakage, contract when gripping objects, and release tension when letting them go. This inspired the researchers to create suckers from a silicone sponge material on the inside and a soft silicone pad on the outside.

For the ultimate biomimicry, Yue thought that the answer to the problems experienced with previous models was to come up with a sucker that simulated the mucus secretion of octopus suckers.

This really sucks

Cephalopod suction was previously thought to be a product of these creatures’ soft, flexible bodies, which can deform easily to adapt to whatever surface they need to grip. Mucus secretion was mostly overlooked until Yue decided to incorporate it into his robo-suckers.

Mollusk mucus is known to be five times more viscous than water. For Yue’s suckers, an artificial fluidic system, designed to mimic the secretions released by glands on a biological sucker, creates a liquid seal between the sucker and the surface it is adhering to, just about eliminating gaps. It might not have the strength of octopus slime, but water is the next best option for a robot that is going to be immersed in water when it goes exploring, possibly in underwater caves or at the bottom of the ocean.



Research roundup: 7 stories we almost missed


Ping-pong bots, drumming chimps, picking styles of two jazz greats, and an ancient underground city’s soundscape

Time-lapse photos show a new ping-pong-playing robot performing a topspin. Credit: David Nguyen, Kendrick Cancio and Sangbae Kim

It’s a regrettable reality that there is never time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. May’s list includes a nifty experiment to make a predicted effect of special relativity visible; a ping-pong playing robot that can return hits with 88 percent accuracy; and the discovery of the rare genetic mutation that makes orange cats orange, among other highlights.

Special relativity made visible

The Terrell–Penrose effect: fast objects appear rotated. Credit: TU Wien

Perhaps the most well-known features of Albert Einstein’s special theory of relativity are time dilation and length contraction. In 1959, two physicists predicted another feature of relativistic motion: an object moving near the speed of light should also appear to be rotated. It’s not been possible to demonstrate this experimentally, however—until now. Physicists at the Vienna University of Technology figured out how to reproduce this rotational effect in the lab using laser pulses and precision cameras, according to a paper published in the journal Communications Physics.

They found their inspiration in art, specifically an earlier collaboration between artist Enar de Dios Rodriguez, VUT, and the University of Vienna on a project involving ultra-fast photography and slow light. For this latest research, they used objects shaped like a cube and a sphere and moved them around the lab while zapping them with ultrashort laser pulses, recording the flashes with a high-speed camera.

Getting the timing just right effectively mimics a speed of light of only 2 meters per second. After photographing the objects many times using this method, the team combined the still images into a single image. The results: the cube looked twisted and the sphere’s North Pole was in a different location—a demonstration of the rotational effect predicted back in 1959.
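For the idealized textbook case (a small, distant object crossing the line of sight at speed v), the apparent Terrell rotation angle is usually quoted as the aberration angle; treat the expression below as that standard approximation rather than the paper's exact experimental geometry.

```latex
\theta_{\text{apparent}} = \arcsin\!\left(\frac{v}{c}\right),
\qquad \text{e.g. } v = 0.8\,c \;\Rightarrow\; \theta \approx 53^{\circ}.
```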

Communications Physics, 2025. DOI: 10.1038/s42005-025-02003-6

Drumming chimpanzees

A chimpanzee feeling the rhythm. Credit: Current Biology/Eleuteri et al., 2025.

Chimpanzees are known to “drum” on the roots of trees as a means of communication, often combining that action with what are known as “pant-hoot” vocalizations (see above video). Scientists have found that the chimps’ drumming exhibits key elements of musical rhythm much as human drumming does, specifically non-random timing and isochrony, according to a paper published in the journal Current Biology. And chimps from different geographical regions have different drumming rhythms.

Back in 2022, the same team observed that individual chimps had unique styles of “buttress drumming,” which served as a kind of communication, letting others in the same group know their identity, location, and activity. This time around they wanted to know if this was also true of chimps living in different groups and whether their drumming was rhythmic in nature. So they collected video footage of the drumming behavior among 11 chimpanzee communities across six populations in East Africa (Uganda) and West Africa (Ivory Coast), amounting to 371 drumming bouts.

Their analysis of the drum patterns confirmed their hypothesis. The western chimps drummed in regularly spaced hits, used faster tempos, and started drumming earlier during their pant-hoot vocalizations. Eastern chimps would alternate between shorter and longer spaced hits. Since this kind of rhythmic percussion is one of the earliest evolved forms of human musical expression and is ubiquitous across cultures, findings such as this could shed light on how our love of rhythm evolved.

Current Biology, 2025. DOI: 10.1016/j.cub.2025.04.019

Distinctive styles of two jazz greats

Wes Montgomery (left) and Joe Pass (right) playing guitars

Jazz lovers likely need no introduction to Joe Pass and Wes Montgomery, 20th century guitarists who influenced generations of jazz musicians with their innovative techniques. Montgomery, for instance, didn’t use a pick, preferring to pluck the strings with his thumb—a method he developed because he practiced at night after working all day as a machinist and didn’t want to wake his children or neighbors. Pass developed his own range of picking techniques, including fingerpicking, hybrid picking, and “flat picking.”

Chirag Gokani and Preston Wilson, both with Applied Research Laboratories at the University of Texas at Austin, greatly admired both Pass and Montgomery and decided to explore the acoustics underlying their distinctive playing, modeling the interactions of the thumb, fingers, and pick with a guitar string. They described their research during a meeting of the Acoustical Society of America in New Orleans, LA.

Among their findings: Montgomery achieved his warm tone by playing closer to the bridge and mostly plucking at the string. Pass’s rich tone arose from a combination of using a pick and playing closer to the guitar neck. There were also differences in how much a thumb, finger, and pick slip off the string: use of the thumb (Montgomery) produced more of a “pluck” compared to the pick (Pass), which produced more of a “strike.” Gokani and Wilson think their model could be used to synthesize digital guitars with a more realistic sound, as well as to help guitarists better emulate Pass and Montgomery.
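The researchers’ model isn’t reproduced here, but the classic Karplus–Strong algorithm gives a feel for how a plucked string can be synthesized digitally: a short burst of noise stands in for the pluck, and a damped feedback loop turns it into a decaying tone. The parameters below are arbitrary illustrative choices, not values from their study.

```python
# Classic Karplus-Strong plucked-string synthesis (illustrative sketch).
import numpy as np

def pluck(frequency=220.0, duration=1.0, sample_rate=44100, damping=0.996):
    n = int(sample_rate / frequency)          # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, n)         # initial noise burst ("pluck")
    out = np.empty(int(duration * sample_rate))
    for i in range(len(out)):
        out[i] = buf[i % n]
        # Average adjacent samples and damp them: a crude low-pass filter
        # that mimics energy loss in a real vibrating string.
        buf[i % n] = damping * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out
```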

Sounds of an ancient underground city

A collection of images from the underground tunnels of Derinkuyu.

Credit: Sezin Nas

Turkey is home to the underground city Derinkuyu, originally carved out inside soft volcanic rock around the 8th century BCE. It was later expanded to include four main ventilation channels (and some 50,000 smaller shafts) serving seven levels, which could be closed off from the inside with a large rolling stone. The city could hold up to 20,000 people and it was connected to another underground city, Kaymakli, via tunnels. Derinkuyu helped protect Arab Muslims during the Arab-Byzantine wars, served as a refuge from the Ottomans in the 14th century, and as a haven for Armenians escaping persecution in the early 20th century, among other functions.

The tunnels were rediscovered in the 1960s and about half of the city has been open to visitors since 2016. The site is naturally of great archaeological interest, but there has been little to no research on the acoustics of the site, particularly the ventilation channels—one of Derinkuyu’s most unique features, according to Sezin Nas, an architectural acoustician at Istanbul Galata University in Turkey. She gave a talk at a meeting of the Acoustical Society of America in New Orleans, LA, about her work on the site’s acoustic environment.

Nas analyzed a church, a living area, and a kitchen, measuring sound sources and reverberation patterns, among other factors, to create a 3D virtual soundscape. The hope is that a better understanding of this aspect of Derinkuyu could improve the design of future underground urban spaces—as well as one day using her virtual soundscape to enable visitors to experience the sounds of the city themselves.

MIT’s latest ping-pong robot

Robots playing ping-pong have been a thing since the 1980s, of particular interest to scientists because it requires the robot to combine the slow, precise ability to grasp and pick up objects with dynamic, adaptable locomotion. Such robots need high-speed machine vision, fast motors and actuators, precise control, and the ability to make accurate predictions in real time, not to mention being able to develop a game strategy. More recent designs use AI techniques to allow the robots to “learn” from prior data to improve their performance.

MIT researchers have built their own version of a ping-pong playing robot, incorporating a lightweight design and the ability to precisely return shots. They built on prior work developing the Humanoid, a small bipedal two-armed robot—specifically, modifying the Humanoid’s arm by adding an extra degree of freedom to the wrist so the robot could control a ping-pong paddle. They tested their robot by mounting it on a ping-pong table and lobbing 150 balls at it from the other side of the table, capturing the action with high-speed cameras.

The new bot can execute three different swing types (loop, drive, and chip), and during the trial runs it returned the ball with impressive accuracy across all three types: 88.4 percent, 89.2 percent, and 87.5 percent, respectively. Subsequent tweaks to their system brought the robot’s strike speed up to 19 meters per second (about 42 MPH), within the 12 to 25 meters per second range of advanced human players. The addition of control algorithms gave the robot the ability to aim. The robot still has limited mobility and reach because it has to be fixed to the ping-pong table, but the MIT researchers plan to rig it to a gantry or wheeled platform in the future to address that shortcoming.

Why orange cats are orange

an orange tabby kitten

Cat lovers know orange cats are special for more than their unique coloring, but that’s the quality that has intrigued scientists for almost a century. Sure, lots of animals have orange, ginger, or yellow hues, like tigers, orangutans, and golden retrievers. But in domestic cats that color is specifically linked to sex. Almost all orange cats are male. Scientists have now identified the genetic mutation responsible and it appears to be unique to cats, according to a paper published in the journal Current Biology.

Prior work had narrowed down the region on the X chromosome most likely to contain the relevant mutation. The scientists knew that females usually have just one copy of the mutation and in that case have tortoiseshell (partially orange) coloring, although in rare cases, a female cat will be orange if both X chromosomes have the mutation. Over the last five to ten years, there has been an explosion in genome resources (including complete sequenced genomes) for cats, which greatly aided the team’s research, along with additional DNA samples taken from cats at spay and neuter clinics.

From an initial pool of 51 candidate variants, the scientists narrowed it down to three genes, only one of which was likely to play any role in gene regulation: Arhgap36. It wasn’t known to play any role in pigment cells in humans, mice, or non-orange cats. But orange cats are special; their mutation (sex-linked orange) turns on Arhgap36 expression in pigment cells (and only pigment cells), thereby interfering with the molecular pathway that controls coat color in other orange-shaded mammals. The scientists suggest that this is an example of how genes can acquire new functions, thereby enabling species to better adapt and evolve.

Current Biology, 2025. DOI: 10.1016/j.cub.2025.03.075

Not a Roman “massacre” after all

Two of the skeletons excavated by Mortimer Wheeler in the 1930s, dating from the 1st century AD.

Credit: Martin Smith

In 1936, archaeologists excavating the Iron Age hill fort Maiden Castle in the UK unearthed dozens of human skeletons, all showing signs of lethal injuries to the head and upper body—likely inflicted with weaponry. At the time, this was interpreted as evidence of a pitched battle between the Britons of the local Durotriges tribe and invading Romans. The Romans slaughtered the native inhabitants, thereby bringing a sudden violent end to the Iron Age. At least that’s the popular narrative that has prevailed ever since in countless popular articles, books, and documentaries.

But a paper published in the Oxford Journal of Archaeology calls that narrative into question. Archaeologists at Bournemouth University have re-analyzed those burials, incorporating radiocarbon dating into their efforts. They concluded that those individuals didn’t die in a single brutal battle. Rather, it was Britons killing other Britons over multiple generations between the first century BCE and the first century CE—most likely in periodic localized outbursts of violence in the lead-up to the Roman conquest of Britain. It’s possible there are still many human remains waiting to be discovered at the site, which could shed further light on what happened at Maiden Castle.

Oxford Journal of Archaeology, 2025. DOI: 10.1111/ojoa.12324


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



Want a humanoid, open source robot for just $3,000? Hugging Face is on it.

You may have noticed he said “robots” plural—that’s because there’s a second one. It’s called Reachy Mini, and it looks like a cute, Wall-E-esque statue bust that can turn its head and talk to the user. Among other things, it’s meant to be used to test AI applications, and it’ll run between $250 and $300.

You can sort of think of these products as the equivalent to a Raspberry Pi, but in robot form and for AI developers—Hugging Face’s main customer base.

Hugging Face has previously released AI models meant for robots, as well as a 3D-printable robotic arm. This year, it announced an acquisition of Pollen Robotics, a company that was working on humanoid robots. Hugging Face’s Cadene came to the company by way of Tesla.

For context on the pricing, Tesla’s Optimus Gen 2 humanoid robot (while admittedly much more advanced, at least in theory) is expected to cost at least $20,000.

There is a lot of investment in robotics like this, but there are still big barriers—and price isn’t the only one. There’s battery life, for example; Unitree’s G1 only runs for about two hours on a single charge.



A “biohybrid” robotic hand built using real human muscle cells

Biohybrid robots work by combining biological components like muscles, plant material, and even fungi with non-biological materials. While we are pretty good at making the non-biological parts work, we’ve always had a problem with keeping the organic components alive and well. This is why machines driven by biological muscles have always been rather small and simple—up to a couple centimeters long and typically with only a single actuating joint.

“Scaling up biohybrid robots has been difficult due to the weak contractile force of lab-grown muscles, the risk of necrosis in thick muscle tissues, and the challenge of integrating biological actuators with artificial structures,” says Shoji Takeuchi, a professor at the University of Tokyo in Japan. Takeuchi led a research team that built a full-size, 18-centimeter-long biohybrid human-like hand with all five fingers driven by lab-grown human muscles.

Keeping the muscles alive

Out of all the roadblocks that keep us from building large-scale biohybrid robots, necrosis has probably been the most difficult to overcome. Growing muscles in a lab usually means using a liquid medium to supply nutrients and oxygen to muscle cells seeded on petri dishes or applied to gel scaffolds. Since these cultured muscles are small and ideally flat, nutrients and oxygen from the medium can easily reach every cell in the growing culture.

When we try to make the muscles thicker and therefore more powerful, cells buried deeper in those thicker structures are cut off from nutrients and oxygen, so they die, undergoing necrosis. In living organisms, this problem is solved by the vascular network. But building artificial vascular networks in lab-grown muscles is still something we can’t do very well. So, Takeuchi and his team had to find their way around the necrosis problem. Their solution was sushi rolling.

The team started by growing thin, flat muscle fibers arranged side by side on a petri dish. This gave all the cells access to nutrients and oxygen, so the muscles turned out robust and healthy. Once all the fibers were grown, Takeuchi and his colleagues rolled them into tubes called MuMuTAs (multiple muscle tissue actuators) like they were preparing sushi rolls. “MuMuTAs were created by culturing thin muscle sheets and rolling them into cylindrical bundles to optimize contractility while maintaining oxygen diffusion,” Takeuchi explains.



Robot with 1,000 muscles twitches like human while dangling from ceiling

Plans for 279 robots to start

While the Protoclone is a twitching, dangling robotic prototype right now, there’s a lot of tech packed into its body. Protoclone’s sensory system includes four depth cameras in its skull for vision, 70 inertial sensors to track joint positions, and 320 pressure sensors that provide force feedback. This system lets the robot react to visual input and learn by watching humans perform tasks.

As you can probably tell by the video, the current Protoclone prototype is still in an early developmental stage, requiring ceiling suspension for stability. Clone Robotics previously demonstrated components of this technology in 2022 with the release of its robotic hand, which used the same Myofiber muscle system.

Artificial Muscles Robotic Arm Full Range of Motion + Static Strength Test (V11).

A few months ago, Clone Robotics also showed off a robotic torso powered by the same technology.

Torso 2 by Clone with Actuated Abdomen.

Other companies’ robots typically use other types of actuators, such as solenoids and electric motors. Clone’s pressure-based muscle system is an interesting approach, though getting Protoclone to stand and balance without the need for suspension or umbilicals may still prove a challenge.

Clone Robotics plans to start its production with 279 units called Clone Alpha, with plans to open preorders later in 2025. The company has not announced pricing for these initial units, but given the engineering challenges still ahead, a functional release any time soon seems optimistic.



To help AIs understand the world, researchers put them in a robot


There’s a difference between knowing a word and knowing a concept.

Large language models like ChatGPT display conversational skills, but the problem is they don’t really understand the words they use. They are primarily systems that interact with data obtained from the real world but not the real world itself. Humans, on the other hand, associate language with experiences. We know what the word “hot” means because we’ve been burned at some point in our lives.

Is it possible to get an AI to achieve a human-like understanding of language? A team of researchers at the Okinawa Institute of Science and Technology built a brain-inspired AI model comprising multiple neural networks. The AI was very limited—it could learn a total of just five nouns and eight verbs. But their AI seems to have learned more than just those words; it learned the concepts behind them.

Babysitting robotic arms

“The inspiration for our model came from developmental psychology. We tried to emulate how infants learn and develop language,” says Prasanna Vijayaraghavan, a researcher at the Okinawa Institute of Science and Technology and the lead author of the study.

The idea of teaching AIs the same way we teach little babies is not new—researchers have applied it to standard neural nets that associated words with visuals, and they have also tried teaching an AI using a video feed from a GoPro strapped to a human baby. The problem is that babies do way more than just associate items with words when they learn. They touch everything—grasp things, manipulate them, throw stuff around, and this way, they learn to think and plan their actions in language. An abstract AI model couldn’t do any of that, so Vijayaraghavan’s team gave one an embodied experience—their AI was trained in an actual robot that could interact with the world.

Vijayaraghavan’s robot was a fairly simple system with an arm and a gripper that could pick objects up and move them around. Vision was provided by a simple RGB camera feeding video at a somewhat crude 64×64 pixel resolution.

 The robot and the camera were placed in a workspace, put in front of a white table with blocks painted green, yellow, red, purple, and blue. The robot’s task was to manipulate those blocks in response to simple prompts like “move red left,” “move blue right,” or “put red on blue.” All that didn’t seem particularly challenging. What was challenging, though, was building an AI that could process all those words and movements in a manner similar to humans. “I don’t want to say we tried to make the system biologically plausible,” Vijayaraghavan told Ars. “Let’s say we tried to draw inspiration from the human brain.”

Chasing free energy

The starting point for Vijayaraghavan’s team was the free energy principle, a hypothesis that the brain constantly makes predictions about the world based on internal models, then updates these predictions based on sensory input. The idea is that we first think of an action plan to achieve a desired goal, and then this plan is updated in real time based on what we experience during execution. This goal-directed planning scheme, if the hypothesis is correct, governs everything we do, from picking up a cup of coffee to landing a dream job.
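A cartoon version of that prediction-and-update cycle fits in a few lines. The sketch below is an illustration of the general idea, not the study’s model: an action plan is repeatedly nudged so that the internal model’s predicted outcome moves toward the goal.

```python
# Minimal goal-directed planning loop: adjust the plan until the internal
# model's predicted outcome matches the goal (illustrative only).
import numpy as np

def goal_directed_planning(goal, forward_model, steps=200, lr=0.05):
    plan = np.zeros_like(goal)            # initial action plan
    for _ in range(steps):
        predicted = forward_model(plan)   # what the internal model expects
        error = goal - predicted          # prediction error ("surprise")
        plan = plan + lr * error          # update the plan to shrink the error
    return plan

# Toy usage: a "world" whose outcome is just a squashed version of the plan.
target = np.array([0.4, -0.2, 0.1])
plan = goal_directed_planning(target, forward_model=np.tanh)
```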

All that is closely intertwined with language. Neuroscientists at the University of Parma found that motor areas in the brain got activated when the participants in their study listened to action-related sentences. To emulate that in a robot, Vijayaraghavan used four neural networks working in a closely interconnected system. The first was responsible for processing visual data coming from the camera. It was tightly integrated with a second neural net that handled proprioception: all the processes that ensured the robot was aware of its position and the movement of its body. This second neural net also built internal models of actions necessary to manipulate blocks on the table. Those two neural nets were additionally hooked up to visual memory and attention modules that enabled them to reliably focus on the chosen object and separate it from the image’s background.

The third neural net was relatively simple and processed language using vectorized representations of those “move red right” sentences. Finally, the fourth neural net worked as an associative layer and predicted the output of the previous three at every time step. “When we do an action, we don’t always have to verbalize it, but we have this verbalization in our minds at some point,” Vijayaraghavan says. The AI he and his team built was meant to do just that: seamlessly connect language, proprioception, action planning, and vision.
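Schematically, the four modules could be wired up as in the PyTorch sketch below. It is a simplified stand-in for the architecture described above, not the authors’ code, and the layer sizes are arbitrary.

```python
# Simplified sketch of the four interconnected modules (illustrative only).
import torch
import torch.nn as nn

class EmbodiedLanguageModel(nn.Module):
    def __init__(self, vocab_size=16, hidden=128, proprio_dim=8):
        super().__init__()
        self.vision = nn.Sequential(                  # processes 64x64 RGB frames
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(hidden))
        self.proprioception = nn.GRUCell(proprio_dim, hidden)  # joint states
        self.language = nn.GRUCell(vocab_size, hidden)         # word vectors
        self.associative = nn.Linear(hidden * 3, hidden)       # binds all three

    def forward(self, image, joints, word, h_prop, h_lang):
        v = self.vision(image)
        h_prop = self.proprioception(joints, h_prop)
        h_lang = self.language(word, h_lang)
        fused = self.associative(torch.cat([v, h_prop, h_lang], dim=-1))
        return fused, h_prop, h_lang      # fused prediction plus updated states
```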

When the robotic brain was up and running, they started teaching it some of the possible combinations of commands and sequences of movements. But they didn’t teach it all of them.

The birth of compositionality

In 2016, Brenden Lake, a professor of psychology and data science, published a paper in which his team named a set of competencies machines need to master to truly learn and think like humans. One of them was compositionality: the ability to compose or decompose a whole into parts that can be reused. This reuse lets them generalize acquired knowledge to new tasks and situations. “The compositionality phase is when children learn to combine words to explain things. They [initially] learn the names of objects, the names of actions, but those are just single words. When they learn this compositionality concept, their ability to communicate kind of explodes,” Vijayaraghavan explains.

The AI his team built was made for this exact purpose: to see if it would develop compositionality. And it did.

Once the robot learned how certain commands and actions were connected, it also learned to generalize that knowledge to execute commands it had never heard before, recognizing the names of actions it had not performed and then performing them on combinations of blocks it had never seen. Vijayaraghavan’s AI figured out the concept of moving something to the right or the left or putting an item on top of something. It could also combine words to name previously unseen actions, like putting a blue block on a red one.

While teaching robots to extract concepts from language has been done before, those efforts were focused on making them understand how words were used to describe visuals. Vijayaraghavan built on that to include proprioception and action planning, essentially adding a layer that integrated sense and movement into the way his robot made sense of the world.

But some issues have yet to be overcome. The AI had a very limited workspace. There were only a few objects, and all had a single, cubical shape. The vocabulary included only names of colors and actions, so no modifiers, adjectives, or adverbs. Finally, the robot had to learn around 80 percent of all possible combinations of nouns and verbs before it could generalize well to the remaining 20 percent. Its performance was worse when those ratios dropped to 60/40 and 40/60.

But it’s possible that just a bit more computing power could fix this. “What we had for this study was a single RTX 3090 GPU, so with the latest generation GPU, we could solve a lot of those issues,” Vijayaraghavan argued. That’s because the team hopes that adding more words and more actions won’t result in a dramatic need for computing power. “We want to scale the system up. We have a humanoid robot with cameras in its head and two hands that can do way more than a single robotic arm. So that’s the next step: using it in the real world with real world robots,” Vijayaraghavan said.

Science Robotics, 2025. DOI: 10.1126/scirobotics.adp0751


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



This mantis shrimp-inspired robotic arm can crack an egg

This isn’t the first time scientists have looked to the mantis shrimp as an inspiration for robotics. In 2021, we reported on a Harvard researcher who developed a biomechanical model for the mantis shrimp’s mighty appendage and built a tiny robot to mimic that movement. What’s unusual in the mantis shrimp is that there is a one-millisecond delay between the unlatching and the snapping action.

The Harvard team identified four distinct striking phases and confirmed it’s the geometry of the mechanism that produces the rapid acceleration after the initial unlatching by the sclerites. The short delay may help reduce wear and tear of the latching mechanisms over repeated use.

New types of motion

The operating principle of the Hyperelastic Torque Reversal Mechanism (HeTRM) involves compressing an elastomeric joint until it reaches a critical point, where stored energy is instantaneously released. Credit: Science Robotics, 2025

Co-author Kyu-Jin Cho of Seoul National University became interested in soft robotics as a graduate student, when he participated in the RoboSoft Grand Challenge. Part of his research involved testing the strength of so-called “soft robotic manipulators,” a type often used in assembly lines for welding or painting, for example. He noticed some unintended deformations in the shape under applied force and realized that the underlying mechanism was similar to how the mantis shrimp punches or how fleas manage to jump so high and far relative to their size.

In fact, Cho’s team previously built a flea-inspired catapult mechanism for miniature jumping robots, using the Hyperelastic Torque Reversal Mechanism (HeTRM) his lab developed. Exploiting torque reversal usually involves incorporating complicated mechanical components. However, “I realized that applying [these] principles to soft robotics could enable the creation of new types of motion without complex mechanisms,” Cho said.

Now he’s built on that work to incorporate the HeTRM into a soft robotic arm that relies upon material properties rather than structural design. It’s basically a soft beam with alternating hyperelastic and rigid segments.

“Our robot is made of soft, stretchy materials, kind of like rubber,” said Cho. “Inside, it has a special part that stores energy and releases it all at once—BAM!—to make the robot move super fast. It works a bit like how a bent tree branch snaps back quickly or how a flea jumps really far. This robot can grab things like a hand, crawl across the floor, or even jump high, and it all happens just by pulling on a simple muscle.”
