Abstract

We consider settings where a collective intelligence is formed by aggregating information contributed by many independent agents, such as product reviews, community sensing, or opinion polls. To encourage participation and avoid selection bias, agents should be rewarded for the information they provide. It is important that the rewards provide incentives for relevant and truthful information and discourage random or malicious reports. Incentive schemes can exploit the fact that an agent's private information influences its beliefs about what other agents will report, and compute rewards by comparing an agent's report with that of peer agents. Existing schemes require not only that all agents have the same prior belief, but also that they update these beliefs in an identical way. This assumption is unrealistic, as agents may have very different perceptions of the accuracy of their own information. We have investigated a novel method, which we call the Peer Truth Serum (PTS), that works even when agents update their beliefs differently. It requires that the belief update from prior to posterior satisfies a self-predicting condition. It rewards agents with a payment of c/R(s) if their report s matches that of a randomly chosen reference agent, and nothing otherwise. R is the current distribution of reports, maintained and published by the center collecting the information. We show that as long as R is within a certain bound of the agents' prior Pr, the reward scheme is truthful. Furthermore, as long as Pr is more informed than R, i.e., closer to the true distribution of private information, PTS incentivizes helpful reporting, which guarantees that R converges to the true distribution.
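
To make the payment rule concrete, the sketch below illustrates the PTS reward c/R(s) paid when a report matches that of a randomly chosen reference agent. The constant c, the report values, and the particular published distribution R are hypothetical choices made only for this example and are not taken from the paper.

```python
import random

def pts_payment(report, peer_report, R, c=1.0):
    """Pay c / R(s) if the report s matches the reference agent's report, else 0.

    R is the published distribution of reports (a dict mapping report value
    to its current frequency); c is an illustrative scaling constant.
    """
    if report == peer_report:
        return c / R[report]
    return 0.0

# Hypothetical example: binary quality reports with a published distribution R.
R = {"high": 0.7, "low": 0.3}
reports = ["high", "low", "high", "high"]

for i, s in enumerate(reports):
    # Compare each agent's report with a randomly chosen peer agent's report.
    peers = reports[:i] + reports[i + 1:]
    peer = random.choice(peers)
    print(s, peer, pts_payment(s, peer, R))
```

Note that rarer reports (smaller R(s)) receive a larger payment when matched, which is what makes it unprofitable to simply echo the most common answer.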
