Abstract

Ranking systems, such as those used by product comparison sites and recommender systems, typically use user ratings to rank items by both quality and popularity. Since higher-ranked items are more likely to be selected and yield more revenue for their owners, providers of unpopular or low-quality items have strong incentives to manipulate the ranking in favor of their own items. This paper analyzes the adversary cost of manipulating such rankings in a variety of scenarios. In particular, we compare the adversarial cost of attacking ranking systems that use various trust measures to detect and eliminate malicious ratings with the cost of attacking systems that use no such trust mechanism. We provide theoretical results relating the trust mechanism's capability of detecting malicious ratings to the minimum adversary cost for successfully changing the ranking. Furthermore, we study the impact of sharing trust information between ranking systems on the adversarial cost. We prove that, under certain assumptions, sharing information between two ranking systems about common user identities and detected malicious behavior can significantly increase the adversarial cost of attacking either of them. This holds especially because creating identities (or hiring people) is costly, so the adversary may need to reuse a number of malicious users across systems to reduce the total cost of the attacks. Numerical evaluation of our results shows that the estimated adversary cost of manipulating the item ranking can be made substantial when proper trust mechanisms are employed or combined.
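To make the relation between detection capability and adversary cost concrete, the following Python sketch illustrates one plausible model: each malicious rating is independently discarded by the trust mechanism with some detection probability, so the expected number of paid-for ratings needed to close a given rating gap, and hence the adversary's cost, grows as detection improves. The function name, the linear cost model, and the independence assumption are illustrative choices, not the paper's exact formulation.

    def min_adversary_cost(gap, detect_prob, cost_per_rating=1.0):
        """Expected adversary cost to close a rating gap of `gap` points
        when the trust mechanism independently discards each malicious
        rating with probability `detect_prob` (a hypothetical model, not
        the paper's exact formulation). Each rating costs
        `cost_per_rating` and contributes one point only if it evades
        detection, so on average 1 / (1 - detect_prob) ratings must be
        bought per effective point."""
        if not 0.0 <= detect_prob < 1.0:
            raise ValueError("detect_prob must lie in [0, 1)")
        return gap / (1.0 - detect_prob) * cost_per_rating

    # Cost to close a 50-point gap under increasingly capable trust mechanisms.
    for p in (0.0, 0.5, 0.9):
        print(f"detect_prob={p:.1f}: expected cost = {min_adversary_cost(50, p):.0f}")

Under this toy model the cost grows as 1 / (1 - detect_prob): a mechanism that catches 90% of malicious ratings makes the attack ten times as expensive as no mechanism at all, which mirrors the qualitative claim above that stronger trust mechanisms raise the minimum adversary cost.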
