Title: It’s All Relative: Learning Interpretable Models for Scoring Subjective Bias in Documents from Pairwise Comparisons
Authors: Suresh, Aswin; Wu, Chi Hsuan; Grossglauser, Matthias
Dates: 2024-02-04; 2024-03-17
URL: https://infoscience.epfl.ch/handle/20.500.14299/203453
Type: text::conference output::conference paper not in proceedings

Abstract: We propose an interpretable model to score the subjective bias present in documents, based only on their textual content. Our model is trained on pairs of revisions of the same Wikipedia article, where one version is more biased than the other. Although prior approaches based on bias classification have struggled to obtain high accuracy on this task, we are able to develop a useful model for scoring bias by learning to accurately perform pairwise comparisons. We show that the parameters of the trained model can be interpreted to discover the words most indicative of bias. We also apply our model in three different settings: studying the temporal evolution of bias in Wikipedia articles, comparing news sources by bias, and scoring bias in law amendments. In each case, we demonstrate that the model's outputs can be explained and validated, even for the two domains outside the training-data domain. We also use the model to compare the general level of bias across domains, finding that legal texts are the least biased and news media the most biased, with Wikipedia articles in between.

Keywords: Wikipedia; bias; natural language processing; interpretable models; pairwise comparisons; discrete choice models
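The core idea of the abstract — learning a linear bias score from pairs in which one document is known to be more biased — can be sketched with a Bradley-Terry-style logistic objective over bag-of-words features. This is an illustrative reconstruction, not the paper's actual model: the toy vocabulary, training pairs, and hyperparameters below are all invented for demonstration.

```python
import numpy as np

# Hypothetical toy vocabulary; the learned weight of each word acts as
# its "bias indicator", mirroring the interpretability claim in the abstract.
VOCAB = ["excellent", "terrible", "reportedly", "claims", "said", "announced"]

def bow(doc):
    """Bag-of-words count vector over the toy vocabulary."""
    toks = doc.lower().split()
    return np.array([toks.count(w) for w in VOCAB], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(pairs, lr=0.5, epochs=200):
    """Learn weights w so that score(more_biased) > score(less_biased).

    Each pair is (more_biased_doc, less_biased_doc). The Bradley-Terry-style
    objective maximizes log sigmoid(w . (x_biased - x_neutral)), i.e. the
    probability that the first document wins the pairwise comparison.
    """
    w = np.zeros(len(VOCAB))
    for _ in range(epochs):
        for biased, neutral in pairs:
            d = bow(biased) - bow(neutral)
            p = sigmoid(w @ d)        # model's P(first doc is more biased)
            w += lr * (1.0 - p) * d   # gradient ascent on the log-likelihood
    return w

def score(w, doc):
    """Unbounded bias score; higher means more biased under the model."""
    return float(w @ bow(doc))

# Invented stand-ins for (more biased, less biased) Wikipedia revision pairs.
pairs = [
    ("the excellent minister announced a terrible plan",
     "the minister announced a plan"),
    ("critics said the proposal was terrible",
     "critics reportedly said the proposal claims savings"),
]
w = train(pairs)

# Loaded adjectives should now outscore neutral reporting verbs.
assert score(w, "an excellent and terrible speech") > score(w, "a speech was announced")
```

Because the score is linear in word counts, sorting the entries of `w` directly yields the words the model considers most indicative of bias, which is the interpretability mechanism the abstract highlights.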