Distributed dynamic reinforcement of efficient outcomes in multiagent coordination

Georgios C. Chasparis, Jeff S. Shamma

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation


We consider the problem of achieving distributed convergence to coordination in a multiagent environment. Each agent is modeled as a learning automaton which repeatedly interacts with an unknown environment, receives a reward, and updates the probabilities of its next action based on its own previous actions and received rewards. In this class of problems, more than one stable equilibrium (i.e., coordination structure) exists. We analyze the dynamic behavior of the distributed system in terms of convergence to an efficient equilibrium, suitably defined. In particular, we analyze the effect of dynamic processing on convergence properties, where agents include the derivative of their own reward into the decision process (i.e., derivative action). We show that derivative action can be used as an equilibrium selection scheme by appropriately adjusting derivative feedback gains.
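The update scheme described in the abstract can be illustrated with a minimal sketch. This is not the authors' exact algorithm; it assumes a standard linear reward-inaction learning automaton and adds a simple discrete-time derivative term with an illustrative gain `gamma`. The environment `rewards_fn` and all parameter values are hypothetical.

```python
import numpy as np

def run_automaton(rewards_fn, n_actions=2, gamma=0.5, step=0.05,
                  steps=2000, seed=0):
    """Linear reward-inaction learning automaton with derivative action.

    Each round the automaton samples an action from its mixed strategy p,
    observes a reward r, augments it with a derivative term
    gamma * (r - r_prev), and moves p toward the chosen action in
    proportion to the augmented reward. (Illustrative sketch only.)
    """
    rng = np.random.default_rng(seed)
    p = np.full(n_actions, 1.0 / n_actions)   # initial mixed strategy
    r_prev = 0.0
    for _ in range(steps):
        a = rng.choice(n_actions, p=p)        # sample an action
        r = rewards_fn(a)                     # reward from the environment
        r_tilde = r + gamma * (r - r_prev)    # derivative action on reward
        r_tilde = min(max(r_tilde, 0.0), 1.0) # keep the step size valid
        e = np.zeros(n_actions)
        e[a] = 1.0
        p += step * r_tilde * (e - p)         # reward-inaction update
        p = np.clip(p, 0.0, 1.0)
        p /= p.sum()                          # renormalize the strategy
        r_prev = r
    return p

# Hypothetical environment: action 1 yields a higher reward than action 0,
# so the strategy should concentrate on action 1 over time.
final_p = run_automaton(lambda a: 1.0 if a == 1 else 0.3)
```

With multiple stable coordination structures, the derivative gain `gamma` would be tuned per the paper to steer convergence toward the efficient equilibrium; here a single automaton against a static environment is shown only to make the update rule concrete.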

Original language: English (US)
Title of host publication: 2007 European Control Conference, ECC 2007
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 8
ISBN (Electronic): 9783952417386
State: Published - 2007
Externally published: Yes
Event: 2007 9th European Control Conference, ECC 2007 - Kos, Greece
Duration: Jul 2 2007 - Jul 5 2007

Publication series

Name: 2007 European Control Conference, ECC 2007


Other: 2007 9th European Control Conference, ECC 2007

ASJC Scopus subject areas

  • Control and Systems Engineering


