Title: Learning Decoupled Representations for Human Pose Forecasting
Authors: Parsaeifard, Behnam; Saadatnejad, Saeed; Liu, Yuejiang; Mordan, Taylor; Alahi, Alexandre
Date: 2021-12-09 (2021)
DOI: 10.1109/ICCVW54120.2021.00259
Handle: https://infoscience.epfl.ch/handle/20.500.14299/183773

Abstract: Human pose forecasting involves complex spatiotemporal interactions between body parts (e.g., arms, legs, spine). State-of-the-art approaches use Long Short-Term Memories (LSTMs) or Variational AutoEncoders (VAEs) to solve the problem. Yet, they do not effectively predict human motions when both global trajectory and local pose movements are present. We propose to learn decoupled representations for the global and local pose forecasting tasks. We also show that it is better to stop the prediction when the uncertainty in human motion increases. Our forecasting model outperforms all existing methods to date on the pose forecasting benchmark by over 20%. The code is available online: https://github.com/vita-epfl/decoupled-pose-prediction

Keywords: Motion forecasting; Human pose prediction; Long Short-Term Memory; Decoupled representation

Document type: text::conference output::conference proceedings::conference paper
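To make the decoupling idea in the abstract concrete, the following minimal sketch separates a 2D pose sequence into a global trajectory and body-centered local poses, which could then be forecast by separate predictors. This is an illustration only, not the authors' implementation; the function names and the choice of the hip midpoint as the global reference point are assumptions.

```python
# Minimal sketch (assumed helper names, not the released code): split a pose
# sequence of shape (T, J, 2) into a global trajectory (T, 2) and local poses
# (T, J, 2) expressed relative to that trajectory, then recombine them.
import numpy as np

def decouple(poses, root_joints=(8, 11)):
    """Return (global_traj, local_pose) for poses of shape (T, J, 2)."""
    global_traj = poses[:, root_joints, :].mean(axis=1)   # e.g., hip midpoint per frame
    local_pose = poses - global_traj[:, None, :]          # body-centered keypoints
    return global_traj, local_pose

def recouple(global_traj, local_pose):
    """Inverse operation: re-attach local poses to the global trajectory."""
    return local_pose + global_traj[:, None, :]

# Toy usage: 16 observed frames, 14 joints, 2D keypoints.
poses = np.random.rand(16, 14, 2)
traj, local = decouple(poses)
assert np.allclose(recouple(traj, local), poses)
# Two separate forecasters (e.g., LSTMs) could then predict `traj` and `local`
# independently before the outputs are recombined with `recouple`.
```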