The human visual system computes features of moving objects with high precision, even though these features can change or blend into one another in the retinotopic image. Very little is known about how the human brain accomplishes this complex feat. Using a Ternus-Pikler display, introduced by Gestalt psychologists about a century ago, we show that human observers can perceive features of moving objects at locations where these features are not present. More importantly, our results indicate that these non-retinotopic feature attributions are not errors caused by limitations of the perceptual system; rather, they follow the rules of perceptual grouping. From a computational perspective, our data imply sophisticated real-time transformations of retinotopic relations in the visual cortex. Our results suggest that the human motion and form systems interact to remap the retinotopic projection of physical space, thereby maintaining the identity of moving objects in perceptual space.