Abstract
We consider a scenario in which interacting agents cooperate through an iterative process of 1) forming empirical models of the behavior of other agents and 2) selfishly optimizing a local strategy based on these models. In each iteration, an agent revises its models of other agents. Selfish optimization according to these revised models alters the behavior of each agent, which in turn leads to a new round of revised models. Convergence of this process implies a consistency condition: each agent's behavior is consistent with how the agent is modeled by others, and each agent's local strategy is optimal with respect to how it models other agents. We consider a particular instance of this framework motivated by the "Roboflag drill" coordination scenario. This paper derives conditions for convergence, provides illustrative simulations, and establishes a connection to related work in evolutionary games.
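As a minimal sketch of the iterate-model-optimize loop described in the abstract, the snippet below casts the process as two-player fictitious play on a matrix game: each agent keeps an empirical model of the other's past actions and selfishly best-responds to it. The payoff matrices, the two-player setting, and the fictitious-play update are illustrative assumptions, not the Roboflag drill instance or convergence analysis of the paper itself.

```python
import numpy as np

# Hypothetical payoff matrices (not from the paper): A[i, j] is agent 1's
# payoff for playing action i against agent 2's action j; B[i, j] is
# agent 2's payoff in the same situation.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])

counts1 = np.ones(2)  # empirical counts of agent 1's past actions
counts2 = np.ones(2)  # empirical counts of agent 2's past actions

for t in range(200):
    # Step 1: each agent revises its empirical model of the other agent
    # as the observed frequency of the other's past actions.
    model_of_2 = counts2 / counts2.sum()
    model_of_1 = counts1 / counts1.sum()

    # Step 2: each agent selfishly optimizes (best-responds) against its
    # current model of the other agent.
    a1 = int(np.argmax(A @ model_of_2))
    a2 = int(np.argmax(model_of_1 @ B))

    # The new behavior feeds the next round of revised models.
    counts1[a1] += 1
    counts2[a2] += 1

# At a fixed point the models are consistent: each agent's empirical
# behavior matches how the other agent models it, and each strategy is
# optimal against that model.
print("agent 1 empirical play:", counts1 / counts1.sum())
print("agent 2 empirical play:", counts2 / counts2.sum())
```

Under these assumed payoffs the empirical frequencies settle toward a mutually consistent pair of strategies, illustrating the consistency condition that convergence implies.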
| Original language | English (US) |
| --- | --- |
| Title of host publication | 9th International Conference on Control, Automation, Robotics and Vision, 2006, ICARCV '06 |
| State | Published - 2006 |
| Externally published | Yes |
| Event | 9th International Conference on Control, Automation, Robotics and Vision, 2006, ICARCV '06 - Singapore, Singapore. Duration: Dec 5 2006 → Dec 8 2006 |
Other

| Other | 9th International Conference on Control, Automation, Robotics and Vision, 2006, ICARCV '06 |
| --- | --- |
| Country/Territory | Singapore |
| City | Singapore |
| Period | 12/5/06 → 12/8/06 |
ASJC Scopus subject areas
- Computer Science Applications
- Computer Vision and Pattern Recognition
- Control and Systems Engineering