US Humanoid Robots Achieve Human-Like Walking With AI Training
US robotics firm Figure has unveiled a breakthrough in humanoid robot movement, showcasing its Figure 02 robots walking with a natural gait. A newly released video demonstrates the robots performing heel strikes, toe-offs, and synchronised arm swings, mimicking human locomotion.

Figure developed this capability using reinforcement learning (RL), an artificial intelligence approach that enables robots to learn through trial and error. The company trained its RL controller in a high-fidelity physics simulation, compressing years of data into hours. Thousands of virtual humanoids, each with different physical parameters, were tested in parallel.
The simulation exposed the robots to various real-world conditions, including different terrains, actuator dynamics, and challenges such as slips and shoves. This extensive training allowed Figure to develop a single neural network policy that governs the robots’ movements.
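This parallel training with varied physical parameters is a standard technique known as domain randomization. Figure has not published its simulation code, so the parameter names and ranges below are purely illustrative assumptions, but they sketch the idea: every virtual humanoid samples its own physics, so the learned policy cannot overfit to any single body or terrain.

```python
import random

def randomize_env_params():
    """Sample one virtual humanoid's physical parameters.

    All names and ranges here are illustrative assumptions, not
    Figure's actual simulation configuration.
    """
    return {
        "body_mass_scale": random.uniform(0.8, 1.2),       # vary link masses
        "motor_strength_scale": random.uniform(0.9, 1.1),  # actuator dynamics
        "ground_friction": random.uniform(0.4, 1.0),       # different terrains
        "push_force_newtons": random.uniform(0.0, 150.0),  # random shoves
    }

# Thousands of humanoids are simulated in parallel, each with its own draw:
population = [randomize_env_params() for _ in range(4096)]
```

A policy that walks well across the whole randomized population is far more likely to handle the (unknown) parameters of the physical robot.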
A key advantage of Figure’s approach is "zero-shot" transfer: the trained policy moves directly from simulation onto the real robots without any additional tuning, so the machines can walk naturally across varied environments straight out of training.
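In practice, "zero-shot" simply means the network's weights are frozen after simulation training and the same forward pass runs on the hardware, with no fine-tuning step in between. The tiny linear policy below is a stand-in for Figure's (unpublished) network; the observation and action sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder for weights learned in simulation; shapes are assumptions.
trained_weights = rng.standard_normal((12, 48))

class WalkingPolicy:
    """Stand-in for the trained network: observation -> joint targets."""

    def __init__(self, weights):
        self.w = weights  # frozen after training: no on-robot tuning

    def act(self, obs):
        # tanh keeps joint position targets bounded in [-1, 1]
        return np.tanh(self.w @ obs)

policy = WalkingPolicy(trained_weights)

def control_step(robot_obs):
    # The identical forward pass runs in simulation and on hardware;
    # zero-shot transfer means nothing is retrained in between.
    return policy.act(robot_obs)
```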
The robots’ human-like gait emerges from an RL reward that encourages mimicking human movement while also optimising velocity tracking, energy efficiency, and robustness. The company showcased ten Figure 02 robots operating on the same RL neural network without modifications, highlighting the scalability of its technology.
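A reward combining those objectives can be sketched as a weighted sum of penalty terms. The specific terms and weights below are assumptions for illustration, not Figure's published reward function.

```python
import numpy as np

def walking_reward(state, w_imitate=1.0, w_vel=0.5, w_energy=0.01):
    """Illustrative RL reward combining the objectives described above.

    Terms and weights are assumptions, not Figure's actual reward.
    """
    # Imitation: penalise deviation from a reference human gait pose
    imitation = -np.sum((state["joint_pos"] - state["reference_pos"]) ** 2)
    # Velocity tracking: follow the commanded walking speed
    vel_error = state["base_velocity"] - state["commanded_velocity"]
    velocity = -vel_error ** 2
    # Energy efficiency: penalise large joint torques
    energy = -np.sum(state["joint_torques"] ** 2)
    return w_imitate * imitation + w_vel * velocity + w_energy * energy
```

Each term is zero when the robot exactly matches the reference gait, tracks the commanded speed, and applies no torque; any deviation makes the reward negative, so maximising it pushes the policy toward efficient, human-like walking.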
In February, Figure introduced Helix, a Vision-Language-Action (VLA) model that integrates perception, language understanding, and control to enhance robotic capabilities. The firm expects 2025 to be a pivotal year as it begins production, increases robot shipments, and advances in home robotics.
Figure is positioning itself as a key competitor in the humanoid robotics sector, alongside Tesla’s Optimus, Agility Robotics’ Digit, and Chinese firms such as UBTech Robotics and Unitree Robotics.
Key points:
- Figure 02 robots now walk with human-like heel strikes, toe-offs, and arm swings.
- The company used reinforcement learning to train robots in a physics simulation.
- The trained walking policy transfers directly to real-world robots without extra tuning.
Source: INTERESTING ENGINEERING