Title: Online Multicontact Receding Horizon Planning via Value Function Approximation

Authors: Wang, Jiayi; Kim, Sanghyun; Lembono, Teguh Santoso; Du, Wenqian; Shim, Jaehyun; Samadi, Saeid; Wang, Ke; Ivan, Vladimir; Calinon, Sylvain; Vijayakumar, Sethu; Tonneau, Steve

Dates: 2024-06-05; 2024-01-01

DOI: 10.1109/TRO.2024.3392154

Handle: https://infoscience.epfl.ch/handle/20.500.14299/208307

Web of Science ID: WOS:001218701500002

Abstract: Planning multicontact motions in a receding horizon fashion requires a value function to guide the planner with respect to the future, e.g., to build momentum for traversing large obstacles. Traditionally, the value function is approximated by computing trajectories in a prediction horizon (never executed) that looks beyond the execution horizon. However, given the nonconvex dynamics of multicontact motions, this approach is computationally expensive. To enable online receding horizon planning (RHP) of multicontact motions, we seek efficient approximations of the value function. Specifically, we propose a trajectory-based and a learning-based approach. In the former, RHP with multiple levels of model fidelity, we approximate the value function by computing the prediction horizon with a convex relaxed model. In the latter, locally guided RHP, we learn an oracle that predicts local objectives for locomotion tasks and use these local objectives to construct local value functions guiding a short-horizon RHP. We evaluate both approaches in simulation by planning centroidal trajectories of a humanoid robot walking on moderate slopes, as well as on large slopes where the robot cannot maintain static balance. Our results show that locally guided RHP achieves the best computational efficiency (95%–98.6% of planning cycles converge online). This computational advantage enables us to demonstrate online RHP on the real humanoid robot Talos walking in dynamic environments that change on the fly.

Keywords: Technology; Humanoid Robots; Legged Locomotion; Multicontact Locomotion; Optimization and Optimal Control

Type: text::journal::journal article::research article
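The abstract's core idea — replacing an expensive long prediction horizon with an approximate terminal value function inside a receding horizon loop — can be illustrated with a minimal sketch. This is not the paper's centroidal formulation: the 1-D double-integrator model, the cost terms, and the weights below are illustrative assumptions only.

```python
# Illustrative sketch: receding horizon planning on a 1-D double-integrator,
# with a quadratic terminal cost standing in for the value function beyond
# the short horizon. All dynamics, costs, and weights are assumptions for
# illustration, not the paper's multicontact formulation.
import numpy as np

DT = 0.1  # integration step

def plan_step(x0, horizon=5, value_weight=10.0):
    """Search over constant accelerations; return the one minimizing
    running cost plus the approximate terminal value (cost-to-go)."""
    best_u, best_cost = 0.0, np.inf
    for u in np.linspace(-1.0, 1.0, 201):
        pos, vel = x0
        cost = 0.0
        for _ in range(horizon):
            pos += vel * DT
            vel += u * DT
            cost += pos**2 + 0.1 * u**2           # running cost: drive pos -> 0
        cost += value_weight * (pos**2 + vel**2)  # terminal value approximation
        if cost < best_cost:
            best_cost, best_u = cost, u
    return best_u

def receding_horizon(x0, cycles=50):
    """RHP loop: replan each cycle, execute only the first control."""
    pos, vel = x0
    for _ in range(cycles):
        u = plan_step((pos, vel))
        pos += vel * DT
        vel += u * DT
    return pos, vel

final_pos, final_vel = receding_horizon((1.0, 0.0))
```

The terminal term plays the role the (never executed) prediction horizon plays in traditional RHP: it penalizes ending the short horizon with large residual error or velocity, so the short-horizon planner does not act greedily.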