Abstract

Learning motion control as a unified process of designing both the reference trajectory and the controller is one of the most challenging problems in robotics. The complexity of the problem prevents most existing optimization algorithms from producing satisfactory results. While model-based algorithms such as iterative linear-quadratic-Gaussian (iLQG) can be used to design a suitable controller for motion control, their performance is strongly limited by model accuracy, and an inaccurate model may degrade the controller's performance on the physical system. Although machine learning approaches have proven effective for learning motion control on real systems, their performance depends on good initialization. To address these issues, this paper introduces a two-step algorithm that combines the proven performance of a model-based controller with a model-free method that compensates for model inaccuracy. The first step solves the motion control problem using iLQG. In the second step, the resulting controller is used to initialize the policy for our PI$^2$-01 reinforcement learning algorithm, a variant of PI$^2$ that enables more stable and faster convergence. The performance of the method is demonstrated in both simulation and physical experiments.
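To make the two-step structure concrete, the sketch below shows the general pattern in Python under loose assumptions: step 1 is represented by a placeholder warm start standing in for the iLQG solution, and step 2 uses a generic episodic PI$^2$-style exponentiated-cost update rather than the paper's PI$^2$-01 variant, whose details are not given in this abstract. The cost function, dimensions, and function names are illustrative, not the authors' implementation.

```python
# Sketch of the two-step idea: model-based warm start, then model-free refinement.
# ilqg_initialization() and rollout_cost() are hypothetical placeholders.
import numpy as np

def rollout_cost(theta: np.ndarray) -> float:
    """Placeholder rollout: in practice this would execute the policy on the
    simulated or physical system and return the accumulated trajectory cost."""
    target = np.linspace(0.0, 1.0, theta.size)   # hypothetical reference trajectory
    return float(np.sum((theta - target) ** 2))

def ilqg_initialization(dim: int) -> np.ndarray:
    """Stand-in for step 1: the policy parameters an iLQG solution computed on
    the (possibly inaccurate) model would provide. Here it is just a rough guess."""
    return np.zeros(dim)

def pi2_style_update(theta, n_samples=20, sigma=0.1, lam=0.05, n_iters=100, rng=None):
    """Step 2: model-free refinement via the generic PI^2 update rule, i.e.
    exponentiated-cost weighted averaging of sampled parameter perturbations."""
    rng = rng or np.random.default_rng(0)
    for _ in range(n_iters):
        eps = rng.normal(0.0, sigma, size=(n_samples, theta.size))  # exploration noise
        costs = np.array([rollout_cost(theta + e) for e in eps])
        # Softmax-style weights: low-cost rollouts contribute more to the update.
        w = np.exp(-(costs - costs.min()) / lam)
        w /= w.sum()
        theta = theta + w @ eps
    return theta

if __name__ == "__main__":
    theta0 = ilqg_initialization(dim=10)   # step 1: model-based warm start
    theta = pi2_style_update(theta0)       # step 2: model-free refinement on rollouts
    print("initial cost:", rollout_cost(theta0))
    print("refined cost:", rollout_cost(theta))
```

The point of the warm start is that the sampling-based second step begins near a reasonable solution, so it only needs to compensate for model error rather than discover a controller from scratch.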
