robotics


Research roundup: 7 stories we almost missed


Ping-pong bots, drumming chimps, picking styles of two jazz greats, and an ancient underground city’s soundscape

Time lapse photos show a new ping-pong-playing robot performing a top spin. Credit: David Nguyen, Kendrick Cancio and Sangbae Kim

It’s a regrettable reality that there is never time to cover all the interesting scientific stories we come across each month. In the past, we’ve featured year-end roundups of cool science stories we (almost) missed. This year, we’re experimenting with a monthly collection. May’s list includes a nifty experiment to make a predicted effect of special relativity visible; a ping-pong playing robot that can return hits with 88 percent accuracy; and the discovery of the rare genetic mutation that makes orange cats orange, among other highlights.

Special relativity made visible

The Terrell-Penrose-Effect: Fast objects appear rotated

Credit: TU Wien

Perhaps the most well-known features of Albert Einstein’s special theory of relativity are time dilation and length contraction. In 1959, two physicists predicted another feature of relativistic motion: an object moving near the speed of light should also appear to be rotated. It had not been possible to demonstrate this experimentally, however—until now. Physicists at the Vienna University of Technology figured out how to reproduce this rotational effect in the lab using laser pulses and precision cameras, according to a paper published in the journal Communications Physics.

They found their inspiration in art, specifically an earlier collaboration with artist Enar de Dios Rodriguez, who worked with VUT and the University of Vienna on a project involving ultra-fast photography and slow light. For this latest research, they used objects shaped like a cube and a sphere and moved them around the lab while zapping them with ultrashort laser pulses, recording the flashes with a high-speed camera.

Getting the timing just right effectively simulates a speed of light of just 2 meters per second. After photographing the objects many times using this method, the team combined the still images into a single image. The results: the cube looked twisted, and the sphere’s north pole was in a different location—a demonstration of the rotational effect predicted back in 1959.
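
To get a feel for the size of the effect, here is a minimal sketch based on the textbook Terrell-Penrose relation (an illustration of the predicted geometry, not the Vienna team’s reconstruction method): an object seen at closest approach appears rotated by an angle whose sine is v/c.

```python
import math

def terrell_rotation_deg(speed, light_speed=299_792_458.0):
    """Apparent rotation angle (degrees) of a small object moving at the given
    speed, viewed at closest approach: sin(theta) = v/c (Terrell-Penrose)."""
    return math.degrees(math.asin(speed / light_speed))

# With the experiment's effective light speed of 2 m/s, an object "moving" at
# 1.6 m/s (an illustrative value, not one from the paper) would appear rotated
# by roughly 53 degrees.
print(terrell_rotation_deg(speed=1.6, light_speed=2.0))
```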

DOI: Communications Physics, 2025. 10.1038/s42005-025-02003-6  (About DOIs).

Drumming chimpanzees

A chimpanzee feeling the rhythm. Credit: Current Biology/Eleuteri et al., 2025.

Chimpanzees are known to “drum” on the roots of trees as a means of communication, often combining that action with what are known as “pant-hoot” vocalizations (see above video). Scientists have found that the chimps’ drumming exhibits key elements of human musical rhythm—specifically non-random timing and isochrony—according to a paper published in the journal Current Biology. And chimps from different geographical regions have different drumming rhythms.

Back in 2022, the same team observed that individual chimps had unique styles of “buttress drumming,” which served as a kind of communication, letting others in the same group know their identity, location, and activity. This time around they wanted to know if this was also true of chimps living in different groups and whether their drumming was rhythmic in nature. So they collected video footage of the drumming behavior among 11 chimpanzee communities across six populations in East Africa (Uganda) and West Africa (Ivory Coast), amounting to 371 drumming bouts.

Their analysis of the drum patterns confirmed their hypothesis. The western chimps drummed in regularly spaced hits, used faster tempos, and started drumming earlier during their pant-hoot vocalizations. Eastern chimps would alternate between shorter and longer spaced hits. Since this kind of rhythmic percussion is one of the earliest evolved forms of human musical expression and is ubiquitous across cultures, findings such as this could shed light on how our love of rhythm evolved.

DOI: Current Biology, 2025. 10.1016/j.cub.2025.04.019  (About DOIs).

Distinctive styles of two jazz greats

Wes Montgomery (left) and Joe Pass (right) playing guitars

Jazz lovers likely need no introduction to Joe Pass and Wes Montgomery, 20th century guitarists who influenced generations of jazz musicians with their innovative techniques. Montgomery, for instance, didn’t use a pick, preferring to pluck the strings with his thumb—a method he developed because he practiced at night after working all day as a machinist and didn’t want to wake his children or neighbors. Pass developed his own range of picking techniques, including fingerpicking, hybrid picking, and “flat picking.”

Chirag Gokani and Preston Wilson, both with Applied Research Laboratories and the University of Texas at Austin, greatly admired both Pass and Montgomery and decided to explore the acoustics underlying their distinctive playing, modeling the interactions of the thumb, fingers, and pick with a guitar string. They described their research during a meeting of the Acoustical Society of America in New Orleans, LA.

Among their findings: Montgomery achieved his warm tone by playing closer to the bridge and mostly plucking at the string. Pass’s rich tone arose from a combination of using a pick and playing closer to the guitar neck. There were also differences in how much a thumb, finger, and pick slip off the string: use of the thumb (Montgomery) produced more of a “pluck” compared to the pick (Pass), which produced more of a “strike.” Gokani and Wilson think their model could be used to synthesize digital guitars with a more realistic sound, as well as to help guitarists better emulate Pass and Montgomery.
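
For a rough sense of why the excitation point matters, here is a textbook ideal-string sketch (it is not Gokani and Wilson’s model, which also captures how the thumb, finger, or pick slips off the string): the amplitude of the nth harmonic of a string plucked at a fraction p of its length falls off roughly as sin(nπp)/n², so where you pluck sets how much energy ends up in the upper harmonics.

```python
import numpy as np

def harmonic_amplitudes(pluck_fraction, n_harmonics=8):
    """Relative mode amplitudes for an ideal plucked string:
    A_n ~ sin(n * pi * p) / n**2 (ignores damping, stiffness, and release style)."""
    n = np.arange(1, n_harmonics + 1)
    amps = np.abs(np.sin(n * np.pi * pluck_fraction)) / n**2
    return np.round(amps / amps.max(), 2)

# Illustrative positions (not measurements from the study):
print("near the bridge:", harmonic_amplitudes(0.10))
print("nearer the neck:", harmonic_amplitudes(0.35))
```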

Sounds of an ancient underground city

A collection of images from the underground tunnels of Derinkuyu.

Credit: Sezin Nas

Turkey is home to the underground city Derinkuyu, originally carved out inside soft volcanic rock around the 8th century BCE. It was later expanded to include four main ventilation channels (and some 50,000 smaller shafts) serving seven levels, which could be closed off from the inside with a large rolling stone. The city could hold up to 20,000 people and was connected to another underground city, Kaymakli, via tunnels. Derinkuyu offered protection during the Arab-Byzantine wars, served as a refuge from the Ottomans in the 14th century, and as a haven for Armenians escaping persecution in the early 20th century, among other functions.

The tunnels were rediscovered in the 1960s and about half of the city has been open to visitors since 2016. The site is naturally of great archaeological interest, but there has been little to no research on the acoustics of the site, particularly the ventilation channels—one of Derinkuyu’s most unique features, according to Sezin Nas, an architectural acoustician at Istanbul Galata University in Turkey. She gave a talk at a meeting of the Acoustical Society of America in New Orleans, LA, about her work on the site’s acoustic environment.

Nas analyzed a church, a living area, and a kitchen, measuring sound sources and reverberation patterns, among other factors, to create a 3D virtual soundscape. The hope is that a better understanding of this aspect of Derinkuyu could improve the design of future underground urban spaces—as well as one day using her virtual soundscape to enable visitors to experience the sounds of the city themselves.
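
Acoustic surveys like the one Nas describes typically report quantities such as reverberation time (RT60, the time a sound takes to decay by 60 dB). As a rough, hedged illustration of how such a figure relates to a space’s geometry (a standard Sabine estimate, not Nas’s measurement method, and the numbers below are invented):

```python
def sabine_rt60(volume_m3, surface_areas_m2, absorption_coeffs):
    """Rough reverberation-time estimate via Sabine's formula:
    RT60 = 0.161 * V / sum(S_i * a_i). Real surveys measure impulse responses on site."""
    total_absorption = sum(s * a for s, a in zip(surface_areas_m2, absorption_coeffs))
    return 0.161 * volume_m3 / total_absorption

# Hypothetical rock-cut chamber (invented numbers, not Derinkuyu measurements):
# 200 m^3 of volume, 220 m^2 of hard, mostly reflective stone surfaces.
print(f"{sabine_rt60(200.0, [220.0], [0.05]):.1f} seconds")
```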

MIT’s latest ping-pong robot

Robots playing ping-pong have been a thing since the 1980s, of particular interest to scientists because it requires the robot to combine the slow, precise ability to grasp and pick up objects with dynamic, adaptable locomotion. Such robots need high-speed machine vision, fast motors and actuators, precise control, and the ability to make accurate predictions in real time, not to mention being able to develop a game strategy. More recent designs use AI techniques to allow the robots to “learn” from prior data to improve their performance.

MIT researchers have built their own version of a ping-pong playing robot, incorporating a lightweight design and the ability to precisely return shots. They built on prior work developing the Humanoid, a small bipedal two-armed robot—specifically, modifying the Humanoid’s arm by adding an extra degree of freedom to the wrist so the robot could control a ping-pong paddle. They tested their robot by mounting it on a ping-pong table and lobbing 150 balls at it from the other side of the table, capturing the action with high-speed cameras.

The new bot can execute three different swing types (loop, drive, and chip), and during the trial runs it returned the ball with impressive accuracy across all three types: 88.4 percent, 89.2 percent, and 87.5 percent, respectively. Subsequent tweaks to their system brought the robot’s strike speed up to 19 meters per second (about 42 MPH), close to the 12 to 25 meters per second of advanced human players. The addition of control algorithms gave the robot the ability to aim. The robot still has limited mobility and reach because it has to be fixed to the ping-pong table, but the MIT researchers plan to rig it to a gantry or wheeled platform in the future to address that shortcoming.

Why orange cats are orange

an orange tabby kitten

Cat lovers know orange cats are special for more than their unique coloring, but that’s the quality that has intrigued scientists for almost a century. Sure, lots of animals have orange, ginger, or yellow hues, like tigers, orangutans, and golden retrievers. But in domestic cats that color is specifically linked to sex. Almost all orange cats are male. Scientists have now identified the genetic mutation responsible and it appears to be unique to cats, according to a paper published in the journal Current Biology.

Prior work had narrowed down the region on the X chromosome most likely to contain the relevant mutation. The scientists knew that females usually have just one copy of the mutation and in that case have tortoiseshell (partially orange) coloring, although in rare cases, a female cat will be orange if both X chromosomes have the mutation. Over the last five to ten years, there has been an explosion in genome resources (including complete sequenced genomes) for cats, which greatly aided the team’s research; the researchers also took additional DNA samples from cats at spay and neuter clinics.

From an initial pool of 51 candidate variants, the scientists narrowed it down to three genes, only one of which was likely to play any role in gene regulation: Arhgap36. It wasn’t known to play any role in pigment cells in humans, mice, or non-orange cats. But orange cats are special; their mutation (sex-linked orange) turns on Arhgap36 expression in pigment cells (and only pigment cells), thereby interfering with the molecular pathway that controls coat color in other orange-shaded mammals. The scientists suggest that this is an example of how genes can acquire new functions, thereby enabling species to better adapt and evolve.

DOI: Current Biology, 2025. 10.1016/j.cub.2025.03.075  (About DOIs).

Not a Roman “massacre” after all

Two of the skeletons excavated by Mortimer Wheeler in the 1930s, dating from the 1st century AD.

Credit: Martin Smith

In 1936, archaeologists excavating the Iron Age hill fort Maiden Castle in the UK unearthed dozens of human skeletons, all showing signs of lethal injuries to the head and upper body—likely inflicted with weaponry. At the time, this was interpreted as evidence of a pitched battle between the Britons of the local Durotriges tribe and invading Romans. The Romans slaughtered the native inhabitants, thereby bringing a sudden violent end to the Iron Age. At least that’s the popular narrative that has prevailed ever since in countless popular articles, books, and documentaries.

But a paper published in the Oxford Journal of Archaeology calls that narrative into question. Archaeologists at Bournemouth University have re-analyzed those burials, incorporating radiocarbon dating into their efforts. They concluded that those individuals didn’t die in a single brutal battle. Rather, it was Britons killing other Britons over multiple generations between the first century BCE and the first century CE—most likely in periodic localized outbursts of violence in the lead-up to the Roman conquest of Britain. It’s possible there are still many human remains waiting to be discovered at the site, which could shed further light on what happened at Maiden Castle.

DOI: Oxford Journal of Archaeology, 2025. 10.1111/ojoa.12324  (About DOIs).


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


Want a humanoid, open source robot for just $3,000? Hugging Face is on it.

You may have noticed he said “robots” plural—that’s because there’s a second one. It’s called Reachy Mini, and it looks like a cute, Wall-E-esque statue bust that can turn its head and talk to the user. Among other things, it’s meant to be used to test AI applications, and it’ll run between $250 and $300.

You can sort of think of these products as the equivalent to a Raspberry Pi, but in robot form and for AI developers—Hugging Face’s main customer base.

Hugging Face has previously released AI models meant for robots, as well as a 3D-printable robotic arm. This year, it announced an acquisition of Pollen Robotics, a company that was working on humanoid robots. Hugging Face’s Cadene came to the company by way of Tesla.

For context on the pricing, Tesla’s Optimus Gen 2 humanoid robot (while admittedly much more advanced, at least in theory) is expected to cost at least $20,000.

There is a lot of investment in robotics like this, but there are still big barriers—and price isn’t the only one. There’s battery life, for example; Unitree’s G1 only runs for about two hours on a single charge.


A “biohybrid” robotic hand built using real human muscle cells

Biohybrid robots work by combining biological components like muscles, plant material, and even fungi with non-biological materials. While we are pretty good at making the non-biological parts work, we’ve always had a problem with keeping the organic components alive and well. This is why machines driven by biological muscles have always been rather small and simple—up to a couple centimeters long and typically with only a single actuating joint.

“Scaling up biohybrid robots has been difficult due to the weak contractile force of lab-grown muscles, the risk of necrosis in thick muscle tissues, and the challenge of integrating biological actuators with artificial structures,” says Shoji Takeuchi, a professor at the University of Tokyo in Japan. Takeuchi led a research team that built a full-size, 18-centimeter-long biohybrid human-like hand with all five fingers driven by lab-grown human muscles.

Keeping the muscles alive

Out of all the roadblocks that keep us from building large-scale biohybrid robots, necrosis has probably been the most difficult to overcome. Growing muscles in a lab usually means using a liquid medium to supply nutrients and oxygen to muscle cells seeded on petri dishes or applied to gel scaffolds. Since these cultured muscles are small and ideally flat, nutrients and oxygen from the medium can easily reach every cell in the growing culture.

When we try to make the muscles thicker and therefore more powerful, cells buried deeper in those thicker structures are cut off from nutrients and oxygen, so they die, undergoing necrosis. In living organisms, this problem is solved by the vascular network. But building artificial vascular networks in lab-grown muscles is still something we can’t do very well. So, Takeuchi and his team had to find their way around the necrosis problem. Their solution was sushi rolling.

The team started by growing thin, flat muscle fibers arranged side by side on a petri dish. This gave all the cells access to nutrients and oxygen, so the muscles turned out robust and healthy. Once all the fibers were grown, Takeuchi and his colleagues rolled them into tubes called MuMuTAs (multiple muscle tissue actuators) like they were preparing sushi rolls. “MuMuTAs were created by culturing thin muscle sheets and rolling them into cylindrical bundles to optimize contractility while maintaining oxygen diffusion,” Takeuchi explains.


Robot with 1,000 muscles twitches like human while dangling from ceiling

Plans for 279 robots to start

While the Protoclone is a twitching, dangling robotic prototype right now, there’s a lot of tech packed into its body. Protoclone’s sensory system includes four depth cameras in its skull for vision, 70 inertial sensors to track joint positions, and 320 pressure sensors that provide force feedback. This system lets the robot react to visual input and learn by watching humans perform tasks.

As you can probably tell by the video, the current Protoclone prototype is still in an early developmental stage, requiring ceiling suspension for stability. Clone Robotics previously demonstrated components of this technology in 2022 with the release of its robotic hand, which used the same Myofiber muscle system.

Artificial Muscles Robotic Arm Full Range of Motion + Static Strength Test (V11).

A few months ago, Clone Robotics also showed off a robotic torso powered by the same technology.

Torso 2 by Clone with Actuated Abdomen.

Other companies’ robots typically use other types of actuators, such as solenoids and electric motors. Clone’s pressure-based muscle system is an interesting approach, though getting Protoclone to stand and balance without the need for suspension or umbilicals may still prove a challenge.

Clone Robotics plans to start its production with 279 units called Clone Alpha, with plans to open preorders later in 2025. The company has not announced pricing for these initial units, but given the engineering challenges still ahead, a functional release any time soon seems optimistic.


To help AIs understand the world, researchers put them in a robot


There’s a difference between knowing a word and knowing a concept.

Large language models like ChatGPT display conversational skills, but the problem is they don’t really understand the words they use. They are primarily systems that interact with data obtained from the real world but not the real world itself. Humans, on the other hand, associate language with experiences. We know what the word “hot” means because we’ve been burned at some point in our lives.

Is it possible to get an AI to achieve a human-like understanding of language? A team of researchers at the Okinawa Institute of Science and Technology built a brain-inspired AI model comprising multiple neural networks. The AI was very limited—it could learn a total of just five nouns and eight verbs. But their AI seems to have learned more than just those words; it learned the concepts behind them.

Babysitting robotic arms

“The inspiration for our model came from developmental psychology. We tried to emulate how infants learn and develop language,” says Prasanna Vijayaraghavan, a researcher at the Okinawa Institute of Science and Technology and the lead author of the study.

The idea of teaching AIs the same way we teach little babies is not new—it has been applied to standard neural nets that associate words with visuals, and researchers have also tried teaching an AI using a video feed from a GoPro strapped to a human baby. The problem is that babies do way more than just associate items with words when they learn. They touch everything—grasping things, manipulating them, throwing stuff around—and this way, they learn to think and plan their actions in language. An abstract AI model couldn’t do any of that, so Vijayaraghavan’s team gave one an embodied experience: their AI was trained in an actual robot that could interact with the world.

Vijayaraghavan’s robot was a fairly simple system with an arm and a gripper that could pick objects up and move them around. Vision was provided by a simple RGB camera feeding video at a somewhat crude 64×64-pixel resolution.

The robot and the camera were placed in a workspace, in front of a white table with blocks painted green, yellow, red, purple, and blue. The robot’s task was to manipulate those blocks in response to simple prompts like “move red left,” “move blue right,” or “put red on blue.” All that didn’t seem particularly challenging. What was challenging, though, was building an AI that could process all those words and movements in a manner similar to humans. “I don’t want to say we tried to make the system biologically plausible,” Vijayaraghavan told Ars. “Let’s say we tried to draw inspiration from the human brain.”

Chasing free energy

The starting point for Vijayaraghavan’s team was the free energy principle, a hypothesis that the brain constantly makes predictions about the world based on internal models, then updates these predictions based on sensory input. The idea is that we first think of an action plan to achieve a desired goal, and then this plan is updated in real time based on what we experience during execution. This goal-directed planning scheme, if the hypothesis is correct, governs everything we do, from picking up a cup of coffee to landing a dream job.

All that is closely intertwined with language. Neuroscientists at the University of Parma found that motor areas in the brain got activated when the participants in their study listened to action-related sentences. To emulate that in a robot, Vijayaraghavan used four neural networks working in a closely interconnected system. The first was responsible for processing visual data coming from the camera. It was tightly integrated with a second neural net that handled proprioception: all the processes that ensured the robot was aware of its position and the movement of its body. This second neural net also built internal models of actions necessary to manipulate blocks on the table. Those two neural nets were additionally hooked up to visual memory and attention modules that enabled them to reliably focus on the chosen object and separate it from the image’s background.

The third neural net was relatively simple and processed language using vectorized representations of those “move red right” sentences. Finally, the fourth neural net worked as an associative layer and predicted the output of the previous three at every time step. “When we do an action, we don’t always have to verbalize it, but we have this verbalization in our minds at some point,” Vijayaraghavan says. The AI he and his team built was meant to do just that: seamlessly connect language, proprioception, action planning, and vision.
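
As a very loose structural sketch of that four-module layout (PyTorch is assumed as the framework, and all layer types and sizes are illustrative guesses; the published model is a recurrent, free-energy-style architecture that differs in detail):

```python
import torch
import torch.nn as nn

class EmbodiedLanguageNet(nn.Module):
    """Illustrative layout of the four modules described above: vision,
    proprioception, language, and an associative layer that fuses them to
    predict the next motor command. Sizes and wiring are assumptions for this
    sketch, not the published architecture."""

    def __init__(self, proprio_dim=7, vocab_size=16, hidden=128):
        super().__init__()
        self.vision = nn.Sequential(                      # processes the 64x64 camera frames
            nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(hidden))
        self.proprio = nn.GRUCell(proprio_dim, hidden)    # arm/gripper state over time
        self.embed = nn.Embedding(vocab_size, hidden)     # "move red left"-style commands
        self.lang = nn.GRU(hidden, hidden, batch_first=True)
        self.assoc = nn.GRUCell(3 * hidden, hidden)       # associative layer tying it together
        self.next_action = nn.Linear(hidden, proprio_dim)

    def forward(self, frame, joints, tokens, proprio_h, assoc_h):
        v = self.vision(frame)
        p = self.proprio(joints, proprio_h)
        _, l = self.lang(self.embed(tokens))
        a = self.assoc(torch.cat([v, p, l[-1]], dim=-1), assoc_h)
        return self.next_action(a), p, a

# One illustrative step: a 64x64 RGB frame, 7 joint values, a 3-token command.
net = EmbodiedLanguageNet()
action, p_h, a_h = net(torch.rand(1, 3, 64, 64), torch.rand(1, 7),
                       torch.randint(0, 16, (1, 3)),
                       torch.zeros(1, 128), torch.zeros(1, 128))
```

The only point of the sketch is the wiring: three specialized streams feeding one associative module whose state drives the next motor command.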

When the robotic brain was up and running, they started teaching it some of the possible combinations of commands and sequences of movements. But they didn’t teach it all of them.

The birth of compositionality

In 2016, Brenden Lake, a professor of psychology and data science, published a paper in which his team named a set of competencies machines need to master to truly learn and think like humans. One of them was compositionality: the ability to compose or decompose a whole into parts that can be reused. This reuse lets them generalize acquired knowledge to new tasks and situations. “The compositionality phase is when children learn to combine words to explain things. They [initially] learn the names of objects, the names of actions, but those are just single words. When they learn this compositionality concept, their ability to communicate kind of explodes,” Vijayaraghavan explains.

The AI his team built was made for this exact purpose: to see if it would develop compositionality. And it did.

Once the robot learned how certain commands and actions were connected, it also learned to generalize that knowledge to execute commands it had never heard before, recognizing the names of actions it had not performed and then carrying them out on combinations of blocks it had never seen. Vijayaraghavan’s AI figured out the concept of moving something to the right or the left or putting an item on top of something. It could also combine words to name previously unseen actions, like putting a blue block on a red one.

While teaching robots to extract concepts from language has been done before, those efforts were focused on making them understand how words were used to describe visuals. Vijayaraghavan built on that to include proprioception and action planning, basically adding a layer that integrated sense and movement into the way his robot made sense of the world.

But some issues have yet to be overcome. The AI had a very limited workspace. There were only a few objects, and all had a single, cubical shape. The vocabulary included only names of colors and actions, so no modifiers, adjectives, or adverbs. Finally, the robot had to learn around 80 percent of all possible combinations of nouns and verbs before it could generalize well to the remaining 20 percent. Its performance was worse when those ratios dropped to 60/40 and 40/60.
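
A hedged sketch of that held-out-combination regime (the verb names below are invented; only the five nouns, eight verbs, and the roughly 80/20 split come from the article): enumerate every noun-verb pairing, train on most of them, and test on pairings the robot has never seen, even though every individual word has appeared in training.

```python
import itertools
import random

nouns = ["red", "blue", "green", "yellow", "purple"]             # the five nouns
verbs = ["move left", "move right", "put on", "slide", "push",   # eight verbs; these
         "grasp", "lift", "rotate"]                              # names are invented

commands = [f"{verb} {noun}" for verb, noun in itertools.product(verbs, nouns)]
random.seed(0)
random.shuffle(commands)

split = int(0.8 * len(commands))                                 # the ~80/20 regime
train_cmds, test_cmds = commands[:split], commands[split:]

# Compositional generalization means doing well on test_cmds: verb-noun pairings
# never seen in training, even though each word appears somewhere in train_cmds.
print(len(train_cmds), "training combinations,", len(test_cmds), "held out")
```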

But it’s possible that just a bit more computing power could fix this. “What we had for this study was a single RTX 3090 GPU, so with the latest generation GPU, we could solve a lot of those issues,” Vijayaraghavan argued. That’s because the team hopes that adding more words and more actions won’t result in a dramatic need for computing power. “We want to scale the system up. We have a humanoid robot with cameras in its head and two hands that can do way more than a single robotic arm. So that’s the next step: using it in the real world with real world robots,” Vijayaraghavan said.

Science Robotics, 2025. DOI: 10.1126/scirobotics.adp0751


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


This mantis shrimp-inspired robotic arm can crack an egg

This isn’t the first time scientists have looked to the mantis shrimp as an inspiration for robotics. In 2021, we reported on a Harvard researcher who developed a biomechanical model for the mantis shrimp’s mighty appendage and built a tiny robot to mimic that movement. What’s unusual in the mantis shrimp is that there is a one-millisecond delay between the unlatching and the snapping action.

The Harvard team identified four distinct striking phases and confirmed it’s the geometry of the mechanism that produces the rapid acceleration after the initial unlatching by the sclerites. The short delay may help reduce wear and tear of the latching mechanisms over repeated use.

New types of motion

The operating principle of the Hyperelastic Torque Reversal Mechanism (HeTRM) involves compressing an elastomeric joint until it reaches a critical point, where stored energy is instantaneously released. Credit: Science Robotics, 2025

Co-author Kyu-Jin Cho of Seoul National University became interested in soft robotics as a graduate student, when he participated in the RoboSoft Grand Challenge. Part of his research involved testing the strength of so-called “soft robotic manipulators,” a type often used in assembly lines for welding or painting, for example. He noticed some unintended deformations in the shape under applied force and realized that the underlying mechanism was similar to how the mantis shrimp punches or how fleas manage to jump so high and far relative to their size.

In fact, Cho’s team previously built a flea-inspired catapult mechanism for miniature jumping robots, using the Hyperelastic Torque Reversal Mechanism (HeTRM) his lab developed. Exploiting torque reversal usually involves incorporating complicated mechanical components. However, “I realized that applying [these] principles to soft robotics could enable the creation of new types of motion without complex mechanisms,” Cho said.

Now he’s built on that work to incorporate the HeTRM into a soft robotic arm that relies upon material properties rather than structural design. It’s basically a soft beam with alternating hyperelastic and rigid segments.

“Our robot is made of soft, stretchy materials, kind of like rubber,” said Cho. “Inside, it has a special part that stores energy and releases it all at once—BAM!—to make the robot move super fast. It works a bit like how a bent tree branch snaps back quickly or how a flea jumps really far. This robot can grab things like a hand, crawl across the floor, or even jump high, and it all happens just by pulling on a simple muscle.”


Robotic hand helps pianists overcome “ceiling effect”

Fast and complex multi-finger movements generated by the hand exoskeleton. Credit: Shinichi Furuya

When it comes to fine-tuned motor skills like playing the piano, practice, they say, makes perfect. But expert musicians often experience a “ceiling effect,” in which their skill level plateaus after extensive training. Passive training using a robotic exoskeleton hand could help pianists overcome that ceiling effect, according to a paper published in the journal Science Robotics.

“I’m a pianist, but I [injured] my hand because of overpracticing,” coauthor Shinichi Furuya of Sony Computer Science Laboratories told New Scientist. “I was suffering from this dilemma, between overpracticing and the prevention of the injury, so then I thought, I have to think about some way to improve my skills without practicing.” Recalling that his former teachers used to place their hands over his to show him how to play more advanced pieces, he wondered if he could achieve the same effect with a robotic hand.

So Furuya et al. used a custom-made exoskeleton robot hand capable of moving individual fingers on the right hand independently, flexing and extending the joints as needed. Per the authors, prior studies with robotic exoskeletons focused on simpler movements, such as assisting in the movement of limbs, stabilizing body posture, or helping grasp objects. That sets the custom robotic hand used in these latest experiments apart from those used for haptics in virtual environments.

A helping robot hand

A total of 118 pianists participated in three different experiments. In the first, 30 pianists performed a designated “chord trill” motor task with the piano at home every day for two weeks: first simultaneously striking the D and F keys with the right index and ring fingers, then striking the E and G keys with the right middle and little fingers. “We used this task because it has been widely recognized as technically challenging to play quickly and accurately,” the authors explained. It appears in such classical pieces as Chopin’s Etude Op. 25, No. 6, Maurice Ravel’s “Ondine,” and the first movement of Beethoven’s Piano Sonata No. 3.


Delve into the physics of the Hula-Hoop

High-speed video of experiments on a robotic hula hooper, whose hourglass form holds the hoop up and in place.

Some version of the Hula-Hoop has been around for millennia, but the popular plastic version was introduced by Wham-O in the 1950s and quickly became a fad. Now, researchers have taken a closer look at the underlying physics of the toy, revealing that certain body types are better at keeping the spinning hoops elevated than others, according to a new paper published in the Proceedings of the National Academy of Sciences.

“We were surprised that an activity as popular, fun, and healthy as hula hooping wasn’t understood even at a basic physics level,” said co-author Leif Ristroph of New York University. “As we made progress on the research, we realized that the math and physics involved are very subtle, and the knowledge gained could be useful in inspiring engineering innovations, harvesting energy from vibrations, and improving robotic positioners and movers used in industrial processing and manufacturing.”

Ristroph’s lab frequently addresses these kinds of colorful real-world puzzles. For instance, in 2018, Ristroph and colleagues fine-tuned the recipe for the perfect bubble based on experiments with soapy thin films. In 2021, the Ristroph lab looked into the formation processes underlying so-called “stone forests” common in certain regions of China and Madagascar.

In 2021, his lab built a working Tesla valve, in accordance with the inventor’s design, and measured the flow of water through the valve in both directions at various pressures. They found the water flowed about two times slower in the nonpreferred direction. In 2022, Ristroph studied the surpassingly complex aerodynamics of what makes a good paper airplane—specifically, what is needed for smooth gliding.

Girl twirling a Hula-Hoop in 1958. Credit: George Garrigues/CC BY-SA 3.0

And last year, Ristroph’s lab cracked the conundrum of physicist Richard Feynman’s “reverse sprinkler” problem, concluding that the reverse sprinkler rotates a good 50 times slower than a regular sprinkler but operates along similar mechanisms. The secret is hidden inside the sprinkler, where there are jets that make it act like an inside-out rocket. The internal jets don’t collide head-on; rather, as water flows around the bends in the sprinkler arms, it is slung outward by centrifugal force, leading to asymmetric flow.


New drone has legs for landing gear, enabling efficient launches


The RAVEN walks, it flies, it hops over obstacles, and it’s efficient.

The RAVEN in action. Credit: EPFL/Alain Herzog

Most drones on the market are rotary-wing quadcopters, which can conveniently land and take off almost anywhere. The problem is they are less energy-efficient than fixed-wing aircraft, which can fly greater distances and stay airborne for longer but need a runway, a dedicated launcher, or at least a good old-fashioned throw to get to the skies.

To get past this limit, a team of Swiss researchers at the École Polytechnique Fédérale de Lausanne built a fixed-wing flying robot called RAVEN (Robotic Avian-inspired Vehicle for multiple ENvironments) with a peculiar bio-inspired landing gear: a pair of robotic bird-like legs. “The RAVEN robot can walk, hop over obstacles, and do a jumping takeoff like real birds,” says Won Dong Shin, an engineer leading the project.

Smart investments

The key challenge in attaching legs to drones was that they significantly increased mass and complexity. State-of-the-art robotic legs were designed for robots walking on the ground and were too bulky and heavy to even think about using on a flying machine. So, Shin’s team started their work by taking a closer look at what the leg mass budget looked like in various species of birds.

It turned out that the ratio of leg mass to the total body weight generally increased with size in birds. A carrion crow had legs weighing around 100 grams, which the team took as their point of reference.

The robotic legs built by Shin and his colleagues resembled a real bird’s legs quite closely. Simplifications introduced to save weight included skipping the knee joint and actuated toe joints, resulting in a two-segmented limb with 64 percent of the weight placed around the hip joint. The mechanism was powered by a standard drone motor, with the ankle joint actuated through a system of pulleys and a timing belt. The robotic leg ended in a foot with three forward-facing toes and a single backward-facing hallux.

There were some more sophisticated bird-inspired design features, too. “I embedded a torsional spring in the ankle joint. When the robot’s leg is crouching, it stores the energy in that spring, and then when the leg stretches out, the spring works together with the motor to generate higher jumping speed,” says Shin. A real bird can store elastic energy in its muscle-tendon system during flexion and release it very rapidly during extension for a jumping takeoff. The spring’s job was to emulate this mechanism, and it worked pretty well—“It actually increased the jumping speed by 25 percent,” Shin says.
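
As a back-of-the-envelope illustration of why a parallel spring raises takeoff speed (all of the numbers below are invented for the sketch; only the principle, recovering stored elastic energy during leg extension, comes from the article):

```python
import math

def takeoff_speed(work_joules, mass_kg):
    """Idealized takeoff speed if all delivered work becomes kinetic energy."""
    return math.sqrt(2.0 * work_joules / mass_kg)

mass = 0.62          # hypothetical robot mass, kg (not a figure from the paper)
motor_work = 1.2     # hypothetical work delivered by the motor during extension, J
spring_work = 0.7    # hypothetical elastic energy recovered from the ankle spring, J

v_motor_only = takeoff_speed(motor_work, mass)
v_with_spring = takeoff_speed(motor_work + spring_work, mass)
print(f"{(v_with_spring / v_motor_only - 1.0) * 100:.0f}% faster with the spring")
```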

In the end, the robotic legs weighed around 230 grams, way more than the real ones in a carrion crow, but it turned out that was good enough for the RAVEN robot to walk, jump, take off, and fly.

Crow’s efficiency

The team calculated the necessary takeoff speed for two birds with body masses of 490 grams and a hair over 780 grams; these were 1.85 and 3.21 meters per second, respectively. Based on that, Shin figured the RAVEN robot would need to reach 2.5 meters per second to get airborne. Using the bird-like jumping takeoff strategy, it could reach that speed in just 0.17 seconds.

How did nature’s go-to takeoff procedure stack up against other ways to get to the skies? Other options included a falling takeoff, where you just push your aircraft off a cliff and let gravity do its thing, or standing takeoff, where you position the craft vertically and rely on the propeller to lift it upward. “When I was designing the experiments, I thought the jumping takeoff would be the least energy-efficient because it used extra juice from the battery to activate the legs,” Shin says. But he was in for a surprise.

“What we meant by energy efficiency was calculating the energy input and energy output. The energy output was the kinetic energy and the potential energy at the moment of takeoff, defined as the moment when the feet of the robot stop touching the ground,” Shin explains. The energy input was calculated by measuring the power used during takeoff.
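
Here is a hedged sketch of that bookkeeping (the mass, height, and power trace below are placeholders, not measurements from the paper): energy out is the kinetic plus potential energy at the instant the feet leave the ground, and energy in is the electrical power integrated over the takeoff.

```python
def takeoff_efficiency(mass_kg, liftoff_speed, height_gain, power_trace_w, duration_s):
    """Energy efficiency as defined in the study: (KE + PE at liftoff) / electrical input.
    The input integral here is a crude mean-power approximation."""
    g = 9.81
    energy_out = 0.5 * mass_kg * liftoff_speed**2 + mass_kg * g * height_gain
    energy_in = (sum(power_trace_w) / len(power_trace_w)) * duration_s
    return energy_out / energy_in

# Hypothetical jumping takeoff: a 0.62 kg robot leaving the ground at 2.5 m/s,
# its center of mass 0.15 m higher than at the start of the crouch, after 0.17 s
# of high power draw (all placeholder values).
print(round(takeoff_efficiency(0.62, 2.5, 0.15, [60.0, 80.0, 70.0], 0.17), 2))
```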

The RAVEN takes flight.

“It turned out that the jumping takeoff was actually the most energy-efficient strategy. I didn’t expect that result. It was quite surprising,” Shin says.

The energy cost of the jumping takeoff was slightly higher than that of the other two strategies, but not by much. It required 7.9 percent more juice than the standing takeoff and 6.9 percent more than the falling takeoff. At the same time, it generated much higher acceleration, so you got way better bang for the buck (at least as far as energy was concerned). Overall, jumping with bird-like legs was 9.7 times more efficient than standing takeoff and 4.9 times more efficient than falling takeoff.

One caveat with the team’s calculations was that a fixed-wing drone with a more conventional design, one using wheels or a launcher, would be much more efficient in traditional takeoff strategies than a legged RAVEN robot. “But when you think about it, birds, too, would fly much better without legs. And yet they need them to move on the ground or hunt their prey. You trade some of the in-flight efficiency for more functions,” Shin claims. And the legs offered plenty of functions.

Obstacles ahead

To demonstrate the versatility of their legged flying robot, Shin’s team put it through a series of tasks that would be impossible to complete with a standard drone. Their benchmark mission scenario involved traversing a path with a low ceiling, jumping over a gap, and hopping onto an obstacle. “Assuming an erect position with the tail touching the ground, the robot could walk and remain stable even without advanced controllers,” Shin claims. Walking solved the problem of moving under low ceilings. Jumping over gaps and onto obstacles was done by using the mechanism used for takeoff: torsion springs and actuators. RAVEN could jump over an 11-centimeter-wide gap and onto an obstacle 26 centimeters high.

But Shin says RAVEN will need way more work before it truly shines. “At this stage, the robot cannot clear all those obstacles in one go. We had to reprogram it for each of the obstacles separately,” Shin says. The problem is the control system in RAVEN is not adaptive; the actuators in the legs perform predefined sets of motions to send the robot on a trajectory the team figured out through computer simulations. If there was something blocking the way, RAVEN would have crashed into it.

Another, perhaps more striking limitation is that RAVEN can’t use its legs to land. But this is something Shin and his colleagues want to work on in the future.

“We want to implement some sensors, perhaps vision or haptic sensors. This way, we’re going to know where the landing site is, how many meters away from it we are, and so on,” Shin says. Another modification that’s on the way for RAVEN is foldable wings that the robot will use to squeeze through tight spaces. “Flapping wings would also be a very interesting topic. They are very important for landing, too, because birds decelerate first with their wings, not with their legs. With flapping wings, this is going to be a really bird-like robot,” Shin claims.

All this is intended to prepare RAVEN for search and rescue missions. The idea is legged flying robots would reach disaster-struck areas quickly, land, traverse difficult terrain on foot if necessary, and then take off like birds. “Another application is delivering parcels. Here in Switzerland, I often see helicopters delivering them to people living high up in the mountains, which I think is quite costly. A bird-like drone could do that more efficiently,” Shin suggested.

Nature, 2024.  DOI: 10.1038/s41586-024-08228-9


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


Cheerios effect inspires novel robot design

There’s a common popular science demonstration involving “soap boats,” in which liquid soap poured onto the surface of water creates a propulsive flow driven by gradients in surface tension. But it doesn’t last very long since the soapy surfactants rapidly saturate the water surface, eliminating that surface tension. Using ethanol to create similar “cocktail boats” can significantly extend the effect because the alcohol evaporates rather than saturating the water.

That simple classroom demonstration could also be used to propel tiny robotic devices across liquid surfaces to carry out various environmental or industrial tasks, according to a preprint posted to the physics arXiv. The authors also exploited the so-called “Cheerios effect” as a means of self-assembly to create clusters of tiny ethanol-powered robots.

As previously reported, those who love their Cheerios for breakfast are well acquainted with how those last few tasty little “O”s tend to clump together in the bowl: either drifting to the center or to the outer edges. The “Cheerios effect” is found throughout nature, such as in grains of pollen (or, alternatively, mosquito eggs or beetles) floating on top of a pond; small coins floating in a bowl of water; or fire ants clumping together to form life-saving rafts during floods. A 2005 paper in the American Journal of Physics outlined the underlying physics, identifying the culprit as a combination of buoyancy, surface tension, and the so-called “meniscus effect.”

It all adds up to a type of capillary action. Basically, the mass of the Cheerios is insufficient to break the milk’s surface tension. But it’s enough to put a tiny dent in the surface of the milk in the bowl, such that if two Cheerios are sufficiently close, the curved surface in the liquid (meniscus) will cause them to naturally drift toward each other. The “dents” merge and the “O”s clump together. Add another Cheerio into the mix, and it, too, will follow the curvature in the milk to drift toward its fellow “O”s.

Physicists made the first direct measurements of the various forces at work in the phenomenon in 2019. And they found one extra factor underlying the Cheerios effect: The disks tilted toward each other as they drifted closer in the water. So the disks pushed harder against the water’s surface, resulting in a pushback from the liquid. That’s what leads to an increase in the attraction between the two disks.
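
To put a length scale on that attraction (a standard textbook estimate, not a figure from the 2019 measurements): floating objects only feel each other’s menisci when they sit within a few capillary lengths, L_c = sqrt(γ/(ρg)), which for water-like liquids such as milk comes out to a couple of millimeters.

```python
import math

def capillary_length_mm(surface_tension, density, g=9.81):
    """Capillary length L_c = sqrt(gamma / (rho * g)), returned in millimeters.
    It sets the range over which the meniscus-mediated attraction acts."""
    return 1000.0 * math.sqrt(surface_tension / (density * g))

# Water at room temperature (milk behaves similarly): roughly 2.7 mm.
print(capillary_length_mm(surface_tension=0.072, density=1000.0))
```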


Researchers build ultralight drone that flies with onboard solar

Where does it go? It goes up!

Bizarre design uses a solar-powered motor that’s optimized for weight.

The CoulombFly doing its thing.

On Wednesday, researchers reported that they had developed a drone they’re calling the CoulombFly, which is capable of self-powered hovering for as long as the Sun is shining. The drone, which is shaped like no aerial vehicle you’ve ever seen before, combines solar cells, a voltage converter, and an electrostatic motor to drive a helicopter-like propeller—with all components having been optimized for a balance of efficiency and light weight.

Before people get excited about buying one, the list of caveats is extensive. There’s no onboard control hardware, and the drone isn’t capable of directed flight anyway, meaning it would drift on the breeze if ever set loose outdoors. Lots of the components appear quite fragile, as well. However, the design can be miniaturized, and the researchers built a version that weighs only 9 milligrams.

Built around a motor

One key to this development was the researchers’ recognition that most drones use electromagnetic motors, which involve lots of metal coils that add significant weight to any system. So, the team behind the work decided to focus on developing a lightweight electrostatic motor. These rely on charge attraction and repulsion to power the motor, as opposed to magnetic interactions.

The motor the researchers developed is quite large relative to the size of the drone. It consists of an inner ring of stationary charged plates called the stator. These plates are composed of a thin carbon-fiber plate covered in aluminum foil. When in operation, neighboring plates have opposite charges. A ring of 64 rotating plates surrounds that.

The motor starts operating when the plates in the outer ring are charged. Since one of the nearby plates on the stator will be guaranteed to have the opposite charge, the pull will start the rotating ring turning. When the plates of the stator and rotor reach their closest approach, thin wires will make contact, allowing charges to transfer between them. This ensures that the stator and rotor plates now have the same charge, converting the attraction to a repulsion. This keeps the rotor moving, and guarantees that the rotor’s plate now has the opposite charge from the next stator plate down the line.

These systems typically require very little in the way of amperage to operate. But they do require a large voltage difference between the plates (something we’ll come back to).

When hooked up to a 10-centimeter, eight-bladed propeller, the system could produce a maximum lift of 5.8 grams. This gave the researchers clear weight targets when designing the remaining components.

Ready to hover

The solar power cells were made of a thin film of gallium arsenide, which is far more expensive than other photovoltaic materials, but offers a higher efficiency (30 percent conversion compared to numbers that are typically in the mid-20s). This tends to provide the opposite of what the system needs: reasonable current at a relatively low voltage. So, the system also needed a high-voltage power converter.

Here, the researchers sacrificed efficiency for low weight, arranging a bunch of voltage converters in series to create a system that weighs just 1.13 grams, but steps the voltage up from 4.5 V all the way to 9.0 kV. But it does so with a power conversion efficiency of just 24 percent.

The resulting CoulombFly is dominated by the large cylindrical motor, which is topped by the propeller. Suspended below that is a platform with the solar cells on one side, balanced out by the long, thin power converter on the other.

Meet the CoulombFly.

To test their system, the researchers simply opened a window on a sunny day in Beijing. Starting at noon, the drone took off and hovered for over an hour, and all indications are that it would have continued to do so for as long as the sunlight provided enough power.

The total system required just over half a watt of power to stay aloft. Given a total mass of 4 grams, that works out to a lift-to-power efficiency of 7.6 grams per watt. But a lot of that power is lost during the voltage conversion. If you focus on the motor alone, it only requires 0.14 watts, giving it a lift-to-power efficiency of over 30 grams per watt.
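
The whole-vehicle figure is easy to sanity-check from the numbers quoted here (a simple arithmetic check, not a calculation from the paper):

```python
def lift_to_power_g_per_w(mass_grams, power_watts):
    """Lift-to-power efficiency as quoted in the article: supported mass per watt."""
    return mass_grams / power_watts

# Roughly 4 g kept aloft on just over half a watt works out to about 7.6 g/W.
print(lift_to_power_g_per_w(mass_grams=4.0, power_watts=0.525))
```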

The researchers provide a long list of things they could do to optimize the design, including increasing the motor’s torque and propeller’s lift, placing the solar cells on structural components, and boosting the efficiency of the voltage converter. But one thing they don’t have to optimize is the vehicle’s size since they already built a miniaturized version that’s only 8 millimeters high and weighs just 9 milligrams but is able to generate a milliwatt of power that turns its propeller at over 15,000 rpm.

Again, all this is done without any onboard control circuitry or the hardware needed to move the machine anywhere—they’re basically flying these in cages to keep them from wandering off on the breeze. But there seems to be enough leeway in the weight that some additional hardware should be possible, especially if they manage some of the potential optimizations they mentioned.

Nature, 2024. DOI: 10.1038/s41586-024-07609-4  (About DOIs).


Lightening the load: AI helps exoskeleton work with different strides

One model to rule them all

A model trained in a virtual environment does remarkably well in the real world.

Right now, the software doesn’t do arms, so don’t go taking on any aliens with it. Credit: 20th Century Fox

Exoskeletons today look like something straight out of sci-fi. But the reality is they are nowhere near as robust as their fictional counterparts. They’re quite wobbly, and it takes long hours of handcrafting software policies, which regulate how they work—a process that has to be repeated for each individual user.

To bring the technology a bit closer to Avatar’s Skel Suits or Warhammer 40k power armor, a team at North Carolina State University’s Lab of Biomechatronics and Intelligent Robotics used AI to build the first one-size-fits-all exoskeleton that supports walking, running, and stair-climbing. Critically, its software adapts itself to new users with no need for any user-specific adjustments. “You just wear it and it works,” says Hao Su, an associate professor and co-author of the study.

Tailor-made robots

An exoskeleton is a robot you wear to aid your movements—it makes walking, running, and other activities less taxing, the same way an e-bike adds extra watts on top of those you generate yourself, making pedaling easier. “The problem is, exoskeletons have a hard time understanding human intentions, whether you want to run or walk or climb stairs. It’s solved with locomotion recognition: systems that recognize human locomotion intentions,” says Su.

Building those locomotion recognition systems currently relies on elaborate policies that define what actuators in an exoskeleton need to do in each possible scenario. “Let’s take walking. The current state of the art is we put the exoskeleton on you and you walk on a treadmill for an hour. Based on that, we try to adjust its operation to your individual set of movements,” Su explains.

Building handcrafted control policies and doing long human trials for each user makes exoskeletons super expensive, with prices reaching $200,000 or more. So, Su’s team used AI to automatically generate control policies and eliminate human training. “I think within two or three years, exoskeletons priced between $2,000 and $5,000 will be absolutely doable,” Su claims.

His team hopes these savings will come from developing the exoskeleton control policy using a digital model, rather than living, breathing humans.

Digitizing robo-aided humans

Su’s team started by building digital models of a human musculoskeletal system and an exoskeleton robot. Then they used multiple neural networks that operated each component. One was running the digitized model of a human skeleton, moved by simplified muscles. The second neural network was running the exoskeleton model. Finally, the third neural net was responsible for imitating motion—basically predicting how a human model would move wearing the exoskeleton and how the two would interact with each other. “We trained all three neural networks simultaneously to minimize muscle activity,” says Su.

One problem the team faced is that exoskeleton studies typically use a performance metric based on metabolic rate reduction. “Humans, though, are incredibly complex, and it is very hard to build a model with enough fidelity to accurately simulate metabolism,” Su explains. Luckily, according to the team, reducing muscle activations is rather tightly correlated with metabolic rate reduction, so it kept the digital model’s complexity within reasonable limits. The training of the entire human-exoskeleton system with all three neural networks took roughly eight hours on a single RTX 3090 GPU. And the results were record-breaking.
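
In spirit, the joint optimization might look something like the sketch below (PyTorch is assumed; the network sizes, reference gait, and toy "dynamics" are stand-ins, and the real study trains against a full musculoskeletal simulation): three networks updated together against a single objective that keeps the gait on track while penalizing muscle activity.

```python
import torch
import torch.nn as nn

# Three coupled modules, loosely mirroring the setup described above: a simplified
# human (muscle) policy, the exoskeleton controller, and a motion model predicting
# how the combined human-exoskeleton system moves. Everything here is illustrative.
human_net  = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 2))      # muscle torques
exo_net    = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 2))      # assistive torques
motion_net = nn.Sequential(nn.Linear(8 + 4, 32), nn.Tanh(), nn.Linear(32, 8))  # next gait state

opt = torch.optim.Adam(
    [*human_net.parameters(), *exo_net.parameters(), *motion_net.parameters()], lr=1e-3)

def needed_torque(state):
    # Stand-in for the torque the simulated skeleton needs to keep walking.
    return torch.sin(3.0 * state[:, :2])

def reference_gait(t):
    # Stand-in reference trajectory the motion model should reproduce.
    return torch.sin(0.1 * t + torch.arange(8.0)).unsqueeze(0)

state = torch.rand(1, 8)
for t in range(200):
    muscle = human_net(state)
    assist = exo_net(state)
    next_state = motion_net(torch.cat([state, muscle, assist], dim=-1))
    loss = ((muscle + assist - needed_torque(state)) ** 2).mean()  # keep the gait on track
    loss = loss + ((next_state - reference_gait(t)) ** 2).mean()   # motion model stays plausible
    loss = loss + 0.1 * (muscle ** 2).mean()                       # minimize human muscle activity
    opt.zero_grad(); loss.backward(); opt.step()
    state = next_state.detach()
```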

Bridging the sim-to-real gap

After the neural networks developed the controllers for the digital exoskeleton model in simulation, Su’s team simply copied the control policy over to a real controller running a real exoskeleton. Then, they tested how an exoskeleton trained this way would work with 20 different participants. The average metabolic rate reduction was over 24 percent in walking, over 13 percent in running, and 15.4 percent in stair climbing—all record numbers, meaning their exoskeleton beat every other exoskeleton ever made in each category.

This was achieved without needing any tweaks to fit it to individual gaits. But the neural networks’ magic didn’t end there.

“The problem with traditional, handcrafted policies was that it was just telling it ‘if walking is detected do one thing; if walking faster is detected do another thing.’ These were [a mix of] finite state machines and switch controllers. We introduced end-to-end continuous control,” says Su. What this continuous control meant was that the exoskeleton could follow the human body as it made smooth transitions between different activities—from walking to running, from running to climbing stairs, etc. There was no abrupt mode switching.
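
The contrast Su describes can be caricatured like this (purely illustrative; neither function is code from the study):

```python
def handcrafted_policy(detected_activity):
    """Mode-switching control: a fixed assistance level per detected activity
    (the numbers are arbitrary placeholders)."""
    levels = {"walk": 0.4, "run": 0.7, "stairs": 0.9}
    return levels.get(detected_activity, 0.0)

def continuous_policy(gait_state, network):
    """End-to-end control: the learned network maps the current gait state
    (joint angles, velocities, etc.) straight to assistance torque, with no
    explicit mode detection, so transitions between activities stay smooth."""
    return network(gait_state)

# Illustrative use, with a trivial stand-in for the trained network:
print(handcrafted_policy("walk"))
print(continuous_policy([0.1, 0.2, -0.3], lambda s: 0.5 * sum(s)))
```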

“In terms of software, I think everyone will be using this neural network-based approach soon,” Su claims. To improve the exoskeletons in the future, his team wants to make them quieter, lighter, and more comfortable.

But the plan is also to make them work for people who need them the most. “The limitation now is that we tested these exoskeletons with able-bodied participants, not people with gait impairments. So, what we want to do is something they did in another exoskeleton study at Stanford University. We would take a one-minute video of you walking, and based on that, we would build a model to individualize our general model. This should work well for people with impairments like knee arthritis,” Su claims.

Nature, 2024.  DOI: 10.1038/s41586-024-07382-4
