Causally estimating the effect of YouTube's recommender system using counterfactual bots
Attempts to evaluate the effect of YouTube's recommendation algorithm on content consumption have lacked an appropriate counterfactual (what a user would have watched in the absence of algorithmic recommendations) and therefore cannot disentangle the effects of the algorithm from a user's intentions. Here we propose a method that we call "counterfactual bots" to causally estimate the role of algorithmic recommendations in the consumption of highly partisan content on YouTube. By comparing bots that replicate real users' consumption patterns with "counterfactual" bots that follow rule-based trajectories, we show that, on average, relying exclusively on the YouTube recommender results in less partisan consumption, with the effect most pronounced for heavy partisan consumers. Using a similar method, we also show that if partisan consumers switch to moderate content, YouTube's sidebar recommender "forgets" their partisan preference within roughly 30 videos regardless of their prior history, while homepage recommendations shift more gradually toward moderate content. Overall, our findings indicate that, at least since the algorithm changes that YouTube implemented in 2019, individual consumption patterns mostly reflect individual preferences, with algorithmic recommendations playing, if anything, a moderating role.
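The abstract describes the comparison only at a high level. As a rough illustration of that design (not the authors' actual pipeline), the sketch below contrasts a "replica" bot that replays a real user's watch history with a counterfactual bot that, from a switch point onward, follows only the top sidebar recommendation, then compares partisan consumption after the switch. The names `get_recommendations`, `partisanship`, and the toy score table are hypothetical stand-ins; a real study would drive live browser sessions against YouTube and use validated partisanship labels.

```python
"""Illustrative sketch only: hypothetical stand-ins, not the authors' code."""
import random


def get_recommendations(history):
    # Hypothetical stand-in for reading the sidebar of the last-watched video.
    # A real implementation would scrape a live, logged-in browser session.
    return [f"rec_{random.randint(0, 999)}" for _ in range(10)]


def partisanship(video_id, scores):
    # Toy lookup: a partisanship score in [0, 1] for a video or its channel.
    return scores.get(video_id, 0.5)


def replica_bot(user_history):
    # "Replica" bot: replays the real user's watch sequence verbatim.
    return list(user_history)


def counterfactual_bot(user_history, switch_point, n_steps):
    # Counterfactual bot: copies the user's history up to the switch point,
    # then follows a fixed rule (here: always take the first recommendation).
    trajectory = list(user_history[:switch_point])
    for _ in range(n_steps):
        recs = get_recommendations(trajectory)
        trajectory.append(recs[0])
    return trajectory


def mean_partisanship(trajectory, scores):
    return sum(partisanship(v, scores) for v in trajectory) / len(trajectory)


if __name__ == "__main__":
    scores = {}  # toy score table; empty means every video defaults to 0.5
    user_history = [f"watched_{i}" for i in range(60)]
    replica = replica_bot(user_history)
    counterfactual = counterfactual_bot(user_history, switch_point=30, n_steps=30)
    # The quantity of interest is the difference in partisan consumption
    # after the switch point between the two trajectories.
    effect = (mean_partisanship(replica[30:], scores)
              - mean_partisanship(counterfactual[30:], scores))
    print(f"Estimated effect of following recommendations: {effect:+.3f}")
```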
File: hosseinmardi-et-al-2024-causally-estimating-the-effect-of-youtube-s-recommender-system-using-counterfactual-bots.pdf
Version: publisher
Access: open access
License: CC BY-NC-ND
Size: 17.82 MB
Format: Adobe PDF
MD5 checksum: 16b178d6d655b83ac0cf731727a3720b