Honesty and trust revisited: the advantages of being neutral about other’s cognitive models

Authors: Mario Gómez, Javier Carbó, Clara Benac Earle

Abstract

Open distributed systems pose a challenge to trust modelling due to the dynamic nature of these systems (e.g., electronic auctions) and the unreliability of self-interested agents. The majority of trust models implicitly assume a shared cognitive model for all the agents participating in a society, and thus treat the discrepancy between information and experience as a source of distrust: if an agent states a given quality of service, and another agent experiences a different quality for that service, the discrepancy is typically taken as an indication of dishonesty, and trust is reduced accordingly. Herein, we propose a trust model that does not assume a concrete cognitive model for other agents, but instead uses the discrepancy between the information about other agents and its own experience to better predict the behavior of those agents. This neutrality about other agents' cognitive models allows an agent to obtain utility from liars or from agents holding a different model of the world. The experiments performed suggest that this model improves the performance of an agent in dynamic scenarios under certain conditions, such as those found in market-like evolving environments.

Keywords: Trust, Reputation, ART-testbed

Paper URL: https://doi.org/10.1007/s10458-007-9015-8
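
The following is a minimal, illustrative sketch (not taken from the paper) contrasting the two attitudes described in the abstract: a classic honesty-based model that reads any information/experience discrepancy as dishonesty and lowers trust, versus a "neutral" model that learns the average discrepancy per provider and uses it to correct predictions. Class and variable names (`HonestyBasedTrust`, `NeutralBiasModel`) are hypothetical.

```python
class HonestyBasedTrust:
    """Classic approach: any discrepancy is read as dishonesty and lowers trust."""
    def __init__(self):
        self.trust = 1.0

    def update(self, advertised, experienced):
        # Trust drops in proportion to the observed discrepancy.
        self.trust = max(0.0, self.trust - abs(advertised - experienced))

    def predict(self, advertised):
        return advertised * self.trust


class NeutralBiasModel:
    """Neutral approach: learn the average discrepancy (bias) of a provider and
    use it to correct its advertised quality, extracting utility even from liars."""
    def __init__(self):
        self.bias = 0.0
        self.n = 0

    def update(self, advertised, experienced):
        self.n += 1
        # Running mean of the observed discrepancy.
        self.bias += ((experienced - advertised) - self.bias) / self.n

    def predict(self, advertised):
        return advertised + self.bias


# A provider that systematically overstates quality by 0.3.
observations = [(0.9, 0.6), (0.8, 0.5), (0.7, 0.4)]

honest, neutral = HonestyBasedTrust(), NeutralBiasModel()
for adv, exp in observations:
    honest.update(adv, exp)
    neutral.update(adv, exp)

print(honest.predict(0.9))   # trust collapses, prediction near 0
print(neutral.predict(0.9))  # bias-corrected prediction of about 0.6
```

Under these assumptions, the honesty-based agent discards a provider that is in fact usable once its systematic bias is accounted for, while the neutral agent keeps predicting the experienced quality accurately, which is the intuition behind the model proposed in the paper.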