This four-legged robot learned parkour to better navigate obstacles


ANYmal can do parkour and walk across rubble. The quadrupedal robot went back to school and has learned a lot.

Meet ANYmal, a four-legged dog-like robot designed by researchers at ETH Zürich in Switzerland, in hopes of using such robots for search and rescue on building sites or in disaster areas, among other applications. Now ANYmal has been upgraded to perform rudimentary parkour moves, aka “free running.” Human parkour enthusiasts are known for their remarkably agile, acrobatic feats, and while ANYmal can’t match those, the robot successfully jumped across gaps, climbed up and down large obstacles, and crouched low to maneuver under an obstacle, according to a recent paper published in the journal Science Robotics.

The ETH Zürich team introduced ANYmal’s original approach to reinforcement learning back in 2019 and enhanced its proprioception (the ability to sense movement, action, and location) three years later. Just last year, the team showcased a trio of customized ANYmal robots, tested in environments as close to the harsh lunar and Martian terrain as possible. As previously reported, robots capable of walking could assist future rovers and mitigate the risk of damage from sharp edges or loss of traction in loose regolith. Every robot had a lidar sensor, but each was specialized for particular functions while remaining flexible enough to cover for the others—if one glitches, the others can take over its tasks.

For instance, the Scout model’s main objective was to survey its surroundings using RGB cameras. This robot also used another imager to map regions and objects of interest using filters that let through different areas of the light spectrum. The Scientist model had the advantage of an arm featuring a MIRA (Metrohm Instant Raman Analyzer) and a MICRO (microscopic imager). The MIRA was able to identify chemicals in materials found on the surface of the demonstration area based on how they scattered light, while the MICRO on its wrist imaged them up close. The Hybrid was more of a generalist, helping out the Scout and the Scientist with measurements of scientific targets such as boulders and craters.

As advanced as ANYmal and similar legged robots have become in recent years, significant challenges remain before they are as nimble and agile as humans and other animals. “Before the project started, several of my researcher colleagues thought that legged robots had already reached the limits of their development potential,” said co-author Nikita Rudin, a graduate student at ETH Zurich who also does parkour. “But I had a different opinion. In fact, I was sure that a lot more could be done with the mechanics of legged robots.”

The quadrupedal robot ANYmal practices parkour in a hall at ETH Zürich. Credit: ETH Zurich / Nikita Rudin

Parkour is quite complex from a robotics standpoint, making it an ideal aspirational task for the Swiss team’s next step in ANYmal’s capabilities. Parkour can involve large obstacles, requiring the robot “to perform dynamic maneuvers at the limits of actuation while accurately controlling the motion of the base and limbs,” the authors wrote. To succeed, ANYmal must be able to sense its environment and adapt to rapid changes, selecting a feasible path and sequence of motions from its programmed skill set. And it has to do all that in real time with limited onboard computing.

The Swiss team’s overall approach combines machine learning with model-based control. They split the task into three interconnected components: a perception module that processes data from onboard cameras and lidar to estimate the terrain; a locomotion module with a programmed catalog of movements for overcoming specific terrains; and a navigation module that issues intermediate commands, guiding the locomotion module in selecting which skills to use for a given obstacle or stretch of terrain.
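As a rough illustration of how such a three-module loop might be wired together, here is a minimal Python sketch. All class and method names (PerceptionModule, select_skill, and so on) are hypothetical stand-ins for this article, not the team's actual code.

```python
# Hypothetical sketch of the perception -> navigation -> locomotion loop
# described above. Class and method names are illustrative placeholders,
# not taken from the actual ANYmal software.

import numpy as np


class PerceptionModule:
    def estimate_terrain(self, camera_frames, lidar_points):
        """Fuse camera and lidar data into a local terrain estimate (stub)."""
        return np.zeros((64, 64))  # placeholder elevation grid


class NavigationModule:
    def select_skill(self, terrain_map, goal):
        """Pick a locomotion skill and an intermediate target for it (stub)."""
        # e.g. "walk", "jump", "climb_up", "climb_down", "crouch"
        return "walk", goal


class LocomotionModule:
    def execute(self, skill, intermediate_target, terrain_map):
        """Run the chosen skill's controller for one step (stub)."""
        print(f"executing {skill} toward {intermediate_target}")


def control_step(perception, navigation, locomotion, camera_frames, lidar_points, goal):
    # One pass through the pipeline: sense, decide, act.
    terrain = perception.estimate_terrain(camera_frames, lidar_points)
    skill, target = navigation.select_skill(terrain, goal)
    locomotion.execute(skill, target, terrain)
```

In a real controller this loop would run continuously onboard, which is why the limited computing budget mentioned above matters.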

Rudin, for example, used machine learning to teach ANYmal some new skills through trial and error, namely, scaling obstacles and figuring out how to climb up and jump back down from them. The robot’s camera and artificial neural network enable it to pick the best maneuvers based on its prior training. Another graduate student, Fabian Jenelten, used model-based control to teach ANYmal how to recognize and negotiate gaps in piles of rubble, augmented with machine learning so the robot could have more flexibility in applying known movement patterns to unexpected situations.
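To give a sense of what "trial and error" means here, below is a heavily simplified, hypothetical reinforcement-learning loop in Python. The toy environment, reward terms, and random policy are illustrative assumptions only; the actual training relied on far more sophisticated simulation and policy optimization.

```python
# Highly simplified sketch of learning a jumping/climbing skill by trial and
# error. The environment, reward, and policy are placeholders, not the actual
# training setup used at ETH Zürich.

import numpy as np


class JumpEnvironment:
    """Toy stand-in for a physics simulator with a box-climbing task."""

    def reset(self):
        return np.zeros(4)  # placeholder observation

    def step(self, action):
        obs = np.random.randn(4)
        # Reward getting close to the target, penalize falling (illustrative only).
        reward = -np.linalg.norm(obs[:2]) - 10.0 * float(obs[2] < -1.0)
        done = obs[2] < -1.0
        return obs, reward, done


def rollout(env, policy, max_steps=200):
    """Collect one trial; learning comes from many such episodes."""
    obs, total = env.reset(), 0.0
    for _ in range(max_steps):
        action = policy(obs)
        obs, reward, done = env.step(action)
        total += reward
        if done:
            break
    return total


random_policy = lambda obs: np.random.uniform(-1, 1, size=2)
returns = [rollout(JumpEnvironment(), random_policy) for _ in range(10)]
print(f"mean return over 10 trials: {np.mean(returns):.2f}")
```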

ANYmal on a civil defense training ground. Credit: ETH Zurich / Fabian Jenelten

Among the tasks ANYmal was able to perform was jumping from one box to a neighboring box up to 1 meter away. This required the robot to approach the gap sideways, place its feet as close as possible to the edge, and then use three legs to jump while extending the fourth to land on the other box. It could then transfer two diagonal legs before bringing the final leg across the gap. This meant ANYmal could recover from any missteps and slippage by transferring its weight between the non-leaping legs.
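Read as a controller, that sequence amounts to a small state machine. The sketch below paraphrases the phases described above using made-up names and a simplified recovery rule; it is not the robot's actual control code.

```python
# Illustrative state machine for the box-to-box jump described above.
# Phase names and the recovery rule are paraphrased from the article; this is
# not the robot's real controller.

from enum import Enum, auto


class JumpPhase(Enum):
    APPROACH_SIDEWAYS = auto()       # walk to the gap and align side-on
    PLACE_FEET_AT_EDGE = auto()      # place feet as close to the edge as possible
    THREE_LEG_PUSHOFF = auto()       # jump with three legs, extend the fourth to land
    TRANSFER_DIAGONAL_PAIR = auto()  # move two diagonal legs across the gap
    TRANSFER_FINAL_LEG = auto()      # bring the last leg over


def next_phase(phase, slipped=False):
    """Advance the jump; on a slip, shift weight to the planted legs and retry."""
    if slipped:
        return phase  # re-balance on the non-leaping legs, then repeat the phase
    order = list(JumpPhase)
    idx = order.index(phase)
    return order[min(idx + 1, len(order) - 1)]
```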

ANYmal was also able to climb down from a 1-meter-high box to reach a target on the ground, as well as climb back up onto the box. It could also crouch to reach a target on the other side of a narrow passage, lowering its base and adapting its gait accordingly. The team tested ANYmal’s walking abilities as well; the robot successfully traversed stairs, slopes, and assorted small obstacles.

ANYmal still has its limitations when it comes to navigating real-world environments, whether it be a parkour course or the debris of a collapsed building. For instance, the authors note that they have yet to test the scalability of their approach to more diverse and unstructured scenarios that incorporate a wider variety of obstacles; the robot was only tested in a few select scenarios. “It remains to be seen how well these different modules can generalize to completely new scenarios,” they wrote. The approach is also time-consuming since it requires eight neural networks that must be tuned separately, and some of the networks are interdependent, so changing one means changing and retraining the others as well.

Still, ANYmal “can now evolve in complex scenes where it must climb and jump on large obstacles while selecting a nontrivial path toward its target location,” the authors wrote. Thus, “by aiming to match the agility of free runners, we can better understand the limitations of each component in the pipeline from perception to actuation, circumvent those limits, and generally increase the capabilities of our robots.”

Science Robotics, 2024. DOI: 10.1126/scirobotics.adi7566

Listing image by ETH Zurich / Nikita Rudin
