Analyzing Trust Management Impact to Optimal Adversarial Attacks in Recommender Systems

Ranking systems, such as those in product review sites and various recommender systems, usually employ user ratings to rank favorite items based on their quality and/or popularity. Since higher-ranked items are more likely to be selected by customers and thus yield more revenue for their providers, providers of unpopular and low-quality items have strong incentives to strategically manipulate their ranking. However, rank manipulation attempts are also costly, especially in the presence of trust management mechanisms. This paper analyzes the cost-effectiveness of adversarial attack strategies for manipulating these rankings from various perspectives and identifies the optimal adversary strategies. In particular, we analyze the probability of a successful attack by an adversary in relation to (1) its computational and financial power to generate a certain number of fake identities and insert dishonest ratings into the system, and (2) the capability of the trust mechanism employed to detect and eliminate malicious ratings. By means of extensive simulation experiments, we also estimate the impact of the trust management mechanisms on the attack costs in a variety of settings and compare the probabilities of success under different attack strategies of the adversary to confirm our theoretical results. Additionally, we provide a preliminary analysis and experimental evaluation of the effectiveness of adversarial attacks in the case of multiple rival adversaries. Our research provides new insights into the objective-oriented design of trust-based ranking systems and their attack resilience from an economic perspective.
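The interplay the abstract describes between the adversary's budget of fake identities and the trust mechanism's detection capability can be illustrated with a toy Monte Carlo sketch. This is not the paper's model: the success condition (`k_needed` surviving ratings) and the independent per-rating detection probability `p_detect` are simplifying assumptions introduced here purely for illustration.

```python
import random

def attack_success_prob(n_fakes, p_detect, k_needed, trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that a rank-manipulation
    attack succeeds, under a toy model (not the paper's):

    - the adversary inserts n_fakes dishonest ratings via fake identities;
    - the trust mechanism eliminates each rating independently with
      probability p_detect;
    - the attack succeeds if at least k_needed ratings survive.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        survivors = sum(rng.random() > p_detect for _ in range(n_fakes))
        if survivors >= k_needed:
            successes += 1
    return successes / trials
```

Even this crude model exhibits the trade-off the paper studies: a stronger trust mechanism (higher `p_detect`) forces the adversary to buy more fake identities (higher `n_fakes`) to keep the same success probability, which is what ties attack success to attack cost.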


Year:
2010
Note:
Submitted to the ACM Transactions on Intelligent Systems and Technology, Special Issue on Social Recommender Systems, 2010

Note: The status of this file is: EPFL only


 Record created 2010-09-10, last modified 2018-09-13
