A Model For Trust And Reputation: competence, integrity, and forgiveness in multi-agent systems


Monday, November 21, 2005, 2:30pm - 4:30pm

325b ITE

agent, coordination, reputation, trust

Autonomous agents in heterogeneous, open systems face the difficult problem of establishing and maintaining beneficial relationships. Open social networks -- whether composed of humans, machines, or some combination -- give rise to a need for modeling inter-agent trust and reputation. Because such environments typically have no effective mechanism for authority or enforcement, a means to guard against other agents failing to meet their commitments is crucial. Lacking an enforcement mechanism, agents must instead predict the likely intentions behind, and outcomes of, potential joint actions. In these societies, individual agents must make decisions about forming teams, committing, and taking actions. To make these decisions, agents must estimate how well potential partners will honor their commitments and succeed at their tasks.

I propose to investigate the issues of coordinating agent interactions by developing a framework that explicitly analyzes key components of trust, using a decision-theoretic approach. These components of trust include competence, integrity, forgiveness, and forgetfulness. The proposed thesis will show how agents can effectively employ techniques borrowed from non-cooperative game theory to induce models of other agents and then apply that learned knowledge to make decisions online.
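To make the abstract concrete, the ideas of competence, integrity, forgetfulness, and forgiveness could be sketched roughly as follows. This is not the proposed framework itself; it is an illustrative toy model, and every class and parameter name here (`TrustModel`, `decay`, `forgiveness`) is hypothetical:

```python
class TrustModel:
    """Toy sketch of a decision-theoretic trust estimate for one partner.

    Competence tracks whether the partner's tasks succeed; integrity tracks
    whether its commitments are honored. 'decay' implements forgetfulness by
    discounting old evidence, and 'forgiveness' reduces the weight of a
    broken commitment so trust can recover through later cooperation.
    All names and parameters are illustrative assumptions.
    """

    def __init__(self, decay=0.95, forgiveness=0.1):
        self.decay = decay              # per-observation discount on past evidence
        self.forgiveness = forgiveness  # how much a defection's weight is reduced
        # weighted pseudo-counts: [positive evidence, negative evidence]
        self.comp = [1.0, 1.0]   # competence: [successes, failures]
        self.integ = [1.0, 1.0]  # integrity:  [kept, broken]

    def observe(self, succeeded, kept_commitment):
        # forgetfulness: discount all prior evidence before adding new
        self.comp = [c * self.decay for c in self.comp]
        self.integ = [c * self.decay for c in self.integ]
        self.comp[0 if succeeded else 1] += 1.0
        if kept_commitment:
            self.integ[0] += 1.0
        else:
            # forgiveness: a betrayal counts for slightly less than full
            # weight, so later cooperation can rebuild the estimate
            self.integ[1] += 1.0 - self.forgiveness

    def competence(self):
        s, f = self.comp
        return s / (s + f)

    def integrity(self):
        k, b = self.integ
        return k / (k + b)

    def expected_value(self, payoff, cost):
        # decision-theoretic rule: commit to a joint action only when the
        # payoff, weighted by both trust components, exceeds the cost
        return self.competence() * self.integrity() * payoff - cost
```

An agent would maintain one such model per potential partner, updating it after each interaction and committing only when `expected_value` is positive.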

Committee Members: Marie desJardins (Chairperson), Tim Finin, Michael Littman, Tim Oates and Lina Zhou

Marie desJardins
