Abstract

Reputation is a well-established means to determine trustworthiness in online systems across various contexts, e.g., online transactions, product recommendation, and e-mail spam fighting. However, these reputation systems are typically "closed" to the outside of their community: the set of participants, their possible actions, their evaluation, and the mechanism for deriving trust evaluations are predetermined in the system design. As a result, existing reputation information is rarely reused, and emerging online communities face a "cold start" regarding trustworthiness. In this paper, we discuss the opportunities that arise from combining reputation information from different communities and provide a detailed discussion of the related challenges, namely identification, mapping of reputation semantics, contextual distance, reputation disclosure (dis)incentives, and privacy. For example, the critical issue of identification can be dealt with more effectively by exploiting entity matching and the social structure of the different systems. Furthermore, we argue and theoretically prove that, under certain conditions on reputation semantics, even naive combinations of reputation values from different communities can result in a system capable of detecting misbehavior more effectively than the individual reputation mechanisms themselves.
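
To make the notion of a "naive combination" concrete, the following is a minimal Python sketch that averages normalized reputation values reported by several communities and applies a single detection threshold. The function names, the [0, 1] normalization, and the threshold value are illustrative assumptions and do not reproduce the combination mechanism or the conditions analyzed in the paper.

# Minimal sketch (illustrative only): naively combining reputation values
# from several communities by averaging. All names, the [0, 1] scale, and
# the threshold are assumptions made for this example.

def combine_reputation(scores: dict[str, float]) -> float:
    """Average the reputation scores reported by each community.

    `scores` maps a community identifier to a reputation value already
    normalized to [0, 1]; communities with no information about the
    entity are simply omitted.
    """
    if not scores:
        raise ValueError("no reputation information available (cold start)")
    return sum(scores.values()) / len(scores)


def flag_misbehavior(scores: dict[str, float], threshold: float = 0.4) -> bool:
    """Flag an entity when its combined reputation falls below `threshold`."""
    return combine_reputation(scores) < threshold


# Example: an entity with an acceptable reputation in one community but weak
# reputations in two others falls below the combined threshold.
if __name__ == "__main__":
    entity = {"marketplace": 0.55, "forum": 0.30, "mail": 0.25}
    print(combine_reputation(entity))   # ~0.37
    print(flag_misbehavior(entity))     # True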
