Abstract

Reputation-based trust models built on statistical learning have been studied intensively for large-scale distributed systems, whereas the practical application of game-theoretic approaches that rely on sanctioning is still poorly understood in such settings. This paper studies the relation between the accuracy of such computational learning models and their ability, by virtue of their game-theoretic properties, to effectively enforce cooperation among rational agents. We provide theoretical results showing under which conditions cooperation emerges when trust learning algorithms of a given accuracy are used, and how cooperation can still be sustained while reducing the cost and accuracy of those algorithms. Specifically, we use a computational trust model as a dishonesty detector to filter out unfair ratings and prove that such a model, with reasonable false-positive and false-negative rates, can effectively boost trust and cooperation in the system, assuming participants are rational. These results reveal two interesting observations. First, the key to the success of a reputation system in a rational environment is not a particularly sophisticated learning mechanism but an effective identity management scheme that prevents whitewashing behavior. Second, in a heterogeneous environment where peers use different learning algorithms of a certain accuracy to learn the trustworthiness of their potential partners, cooperation may also emerge; in other words, different computational trust models have relatively the same effect on rational participants in boosting trust and cooperation among them. We verify and extend these theoretical results in a variety of settings involving honest, malicious, and strategic players through extensive simulation. These results enable a much more targeted, cost-effective, and realistic design of decentralized trust management systems, such as those needed for peer-to-peer, electronic commerce, or community systems.

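To make the filtering idea in the abstract concrete, the following minimal Python sketch (not taken from the paper) simulates a dishonesty detector characterized only by its false-positive rate (honest ratings wrongly discarded) and false-negative rate (unfair ratings wrongly kept), and shows how filtering unfair ratings moves the aggregated reputation closer to a target's true behavior. All names, rates, and the aggregation rule are illustrative assumptions, not the authors' model.

import random

# Hypothetical detector error rates (illustrative values only).
FALSE_POSITIVE_RATE = 0.10  # honest rating flagged as unfair and discarded
FALSE_NEGATIVE_RATE = 0.15  # unfair rating accepted as if it were honest

def detector_accepts(rating_is_honest: bool) -> bool:
    """Simulate the detector's accept/reject decision for one rating."""
    if rating_is_honest:
        return random.random() >= FALSE_POSITIVE_RATE
    return random.random() < FALSE_NEGATIVE_RATE

def filtered_reputation(ratings):
    """Average only the ratings the detector accepts.

    Each rating is a (value, is_honest) pair; unfair raters report the
    opposite of the target's observed behavior.
    """
    kept = [value for value, is_honest in ratings if detector_accepts(is_honest)]
    return sum(kept) / len(kept) if kept else 0.5  # neutral prior when nothing survives

if __name__ == "__main__":
    random.seed(0)
    # Target truly cooperates 80% of the time; 30% of raters lie.
    true_behavior, liar_fraction, n = 0.8, 0.3, 1000
    ratings = []
    for _ in range(n):
        honest = random.random() >= liar_fraction
        observed = 1 if random.random() < true_behavior else 0
        ratings.append((observed if honest else 1 - observed, honest))
    print("raw average:     ", sum(v for v, _ in ratings) / n)
    print("filtered average:", filtered_reputation(ratings))

Under these assumed rates, the filtered average lies noticeably closer to the target's true cooperation level than the raw average, which is the effect the paper quantifies analytically for detectors with bounded false positives and false negatives.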