Novelty is not surprise: Human exploratory and adaptive behavior in sequential decision-making
Classic reinforcement learning (RL) theories cannot explain human behavior in the absence of external reward or when the environment changes. Here, we employ a deep sequential decision-making paradigm with sparse rewards and abrupt environmental changes. To explain the behavior of human participants in these environments, we show that RL theories need to include surprise and novelty, each with a distinct role. While novelty drives exploration before the first encounter with a reward, surprise increases the learning rate of both the world-model and model-free action-values. Even though the world-model is available for model-based RL, we find that human decisions are dominated by model-free action choices. The world-model is only marginally used for planning, but it is important for detecting surprising events. Our theory predicts human action choices with high probability and allows us to dissociate surprise, novelty, and reward in EEG signals.
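The two roles described above, novelty as an exploration bonus before reward is found and surprise as a modulator of the learning rate, can be illustrated with a minimal tabular Q-learning sketch. All function names, parameters, and the specific surprise and novelty measures below are illustrative assumptions for exposition, not the paper's fitted model.

```python
import math

def surprise(prob):
    """Shannon surprise of an observed transition: -log p(observation).
    Illustrative choice; other surprise measures are possible."""
    return -math.log(max(prob, 1e-12))

def novelty(counts, state):
    """Count-based novelty bonus: rarely visited states score higher.
    Illustrative choice, not the paper's novelty definition."""
    return 1.0 / math.sqrt(counts.get(state, 0) + 1.0)

def q_update(q, counts, trans_model, s, a, r, s_next,
             base_lr=0.1, surprise_gain=0.5, novelty_weight=1.0, gamma=0.9):
    """One model-free Q-learning step in which (hypothetically):
    - a novelty bonus is added to the reward, driving exploration
      before any external reward is encountered;
    - the learning rate grows with the surprise of the observed
      transition under the current world-model `trans_model`."""
    counts[s_next] = counts.get(s_next, 0) + 1
    p = trans_model.get((s, a, s_next), 0.5)   # predicted transition prob
    lr = min(base_lr * (1.0 + surprise_gain * surprise(p)), 1.0)
    bonus = novelty_weight * novelty(counts, s_next)
    target = r + bonus + gamma * max(q.get((s_next, b), 0.0) for b in (0, 1))
    q[(s, a)] = q.get((s, a), 0.0) + lr * (target - q.get((s, a), 0.0))
    return q[(s, a)]
```

With this structure, a fully predicted transition (probability 1) yields zero surprise and leaves the learning rate at its base value, while an improbable transition transiently speeds up learning; the novelty bonus alone suffices to make an agent prefer unvisited states even when all external rewards are zero.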