A Generic Trust Framework For Large-Scale Open Systems Using Machine Learning

In many large-scale distributed systems and on the Web, agents need to interact with unknown agents to carry out tasks or transactions. The ability to reason about and assess the potential risks of such transactions is essential for providing a safe and reliable interaction environment. The traditional approach to reasoning about the risk of a transaction is to determine whether the involved agent is trustworthy on the basis of its behavior history. As a departure from such traditional trust models, we propose a generic trust framework based on machine learning, in which an agent uses its own previous transactions (with other agents) to build a personal knowledge base. This knowledge base is used to assess the trustworthiness of a potential transaction on the basis of its associated features, particularly those features that help discern successful transactions from unsuccessful ones. Appropriate machine learning algorithms are applied to these features to extract the relationships between the potential transaction and previous ones. Experiments on real data sets show that our approach is more accurate than other trust mechanisms, especially when information about the past behavior of the specific agent is scarce, incomplete, or inaccurate.

Published in:
Computational Intelligence, 30(4), 700-721
Hoboken: Wiley-Blackwell

Record created 2014-12-30, last modified 2018-03-17
