Abstract

Humans have many redundancies in their bodies and can make effective use of them to adapt to changes in the environment while walking. They can also vary their walking speed over a wide range. Human-like walking in simulation or by robots can be achieved through imitation learning. However, the walking speed is typically limited to a scale similar to that of the examples used for imitation. Achieving efficient and adaptable locomotion controllers across the full range from walking to running is quite challenging. We propose a novel approach named adaptive imitated central pattern generators (AI-CPG) that combines central pattern generators (CPGs) and deep reinforcement learning (DRL) to enhance humanoid locomotion. Our method involves training a CPG-like controller through imitation learning to generate rhythmic feedforward activity patterns. DRL is not used for CPG parameter tuning; instead, it is applied to form a reflex neural network that adjusts the feedforward patterns based on sensory feedback, enabling stable body balancing and adaptation to changes in the environment or target velocity. Experiments with a 28-degree-of-freedom humanoid in a simulated environment demonstrated that our approach outperformed existing methods in terms of adaptability, balancing ability, and energy efficiency, even on uneven surfaces. This study contributes to the development of versatile humanoid locomotion in diverse environments.
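To make the two-part architecture concrete, the sketch below shows a minimal phase-oscillator CPG producing a rhythmic feedforward joint target, with an additive `feedback` term standing in for the reflex network's correction. All function names, parameters, and values here are illustrative assumptions, not the paper's actual controller.

```python
import numpy as np

def cpg_step(phase, dt=0.01, freq=1.5, amp=0.4, feedback=0.0):
    """Advance a minimal phase-oscillator CPG by one control step.

    Returns the new phase and a rhythmic joint-angle target.
    `feedback` additively modulates the output, playing the role
    the reflex network plays in AI-CPG (illustrative only).
    """
    phase = (phase + 2.0 * np.pi * freq * dt) % (2.0 * np.pi)
    target = amp * np.sin(phase) + feedback
    return phase, target

# Roll out one gait cycle of pure feedforward activity (feedback = 0).
phase, trajectory = 0.0, []
steps_per_cycle = int(1.0 / (1.5 * 0.01))  # one period at freq = 1.5 Hz
for _ in range(steps_per_cycle):
    phase, q = cpg_step(phase)
    trajectory.append(q)
```

In the paper's scheme, the feedforward pattern itself is learned by imitation rather than hand-designed, and the feedback correction is produced by a DRL-trained reflex network conditioned on sensory state.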