Title: An Attention Mechanism for Deep Q-Networks with Applications in Robotic Pushing
Authors: Ewerton, Marco; Calinon, Sylvain; Odobez, Jean-Marc
Date: 2021-04-13 (2021)
Handle: https://infoscience.epfl.ch/handle/20.500.14299/177282
Type: Report (text::report)

Abstract: Humans effortlessly solve push tasks in everyday life, but reproducing these capabilities remains a research challenge in robotics. Physical models are often inaccurate or unattainable. State-of-the-art data-driven approaches learn to compensate for these inaccuracies or dispense with approximate physical models altogether. Nevertheless, data-driven approaches such as Deep Q-Networks (DQNs) frequently get stuck in local optima in large state-action spaces. We propose an attention mechanism for DQNs to improve their sample efficiency and demonstrate in simulation experiments with a UR5 robot arm that such a mechanism helps the DQN learn faster and achieve higher performance in a push task involving objects with unknown dynamics.
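
The abstract does not specify the architecture, so the following is only a minimal sketch of the general idea of combining a DQN with soft attention: the Q-network attends over per-object feature vectors before estimating action values. All names, dimensions, and the PyTorch framing are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (assumed architecture, not the paper's): soft attention
    # over object features feeding a Q-value head.
    import torch
    import torch.nn as nn


    class AttentionDQN(nn.Module):
        def __init__(self, obj_feat_dim: int, num_actions: int, hidden_dim: int = 128):
            super().__init__()
            # Scores each object's features; softmax over objects yields attention weights.
            self.attn_score = nn.Linear(obj_feat_dim, 1)
            # Q-head maps the attention-weighted state summary to per-action values.
            self.q_head = nn.Sequential(
                nn.Linear(obj_feat_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, num_actions),
            )

        def forward(self, obj_feats: torch.Tensor) -> torch.Tensor:
            # obj_feats: (batch, num_objects, obj_feat_dim)
            scores = self.attn_score(obj_feats)         # (batch, num_objects, 1)
            weights = torch.softmax(scores, dim=1)      # attention over objects
            context = (weights * obj_feats).sum(dim=1)  # (batch, obj_feat_dim)
            return self.q_head(context)                 # (batch, num_actions)


    # Usage: greedy action selection for a batch of hypothetical states.
    if __name__ == "__main__":
        net = AttentionDQN(obj_feat_dim=16, num_actions=8)
        states = torch.randn(4, 5, 16)   # 4 states, 5 objects each
        actions = net(states).argmax(dim=1)
        print(actions)

The intuition is that attention weights let the Q-network focus on the objects most relevant to the push action, which is one plausible way such a mechanism could improve sample efficiency in a large state-action space.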