Game-Theoretic Learning in Distributed Control

Jason R. Marden, Jeff S. Shamma

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

In distributed architecture control problems, there is a collection of interconnected decision-making components that seek to realize desirable collective behaviors through local interactions and by processing local information. Applications range from autonomous vehicles to energy to transportation. One approach to control of such distributed architectures is to view the components as players in a game. In this approach, two design considerations are the components’ incentives and the rules that dictate how components react to the decisions of other components. In game-theoretic language, the incentives are defined through utility functions, and the reaction rules are online learning dynamics. This chapter presents an overview of this approach, covering basic concepts in game theory, special game classes, measures of distributed efficiency, utility design, and online learning rules, all with the interpretation of using game theory as a prescriptive paradigm for distributed control design.
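To make the prescriptive recipe concrete, below is a minimal Python sketch, not drawn from the chapter itself, of the two design steps the abstract names: each agent is assigned a utility function (here, its marginal contribution to a global welfare, a standard utility-design construction that yields a potential game) and revises its action with an online learning rule (here, log-linear learning). The specific welfare function, agent count, and temperature are illustrative assumptions.

```python
import math
import random

def welfare(profile):
    """Global objective (assumed for illustration): number of
    distinct resources covered by the agents' choices."""
    return len(set(profile))

def marginal_utility(i, profile):
    """Marginal-contribution utility design: agent i's payoff is the
    welfare with i present minus the welfare with i removed."""
    return welfare(profile) - welfare(profile[:i] + profile[i + 1:])

def log_linear_step(profile, num_resources, temperature):
    """One asynchronous revision of log-linear learning: a uniformly
    random agent resamples its action with probability proportional
    to exp(utility / temperature)."""
    i = random.randrange(len(profile))
    trials = [profile[:i] + [a] + profile[i + 1:] for a in range(num_resources)]
    weights = [math.exp(marginal_utility(i, t) / temperature) for t in trials]
    profile[i] = random.choices(range(num_resources), weights=weights)[0]
    return profile

if __name__ == "__main__":
    random.seed(0)
    num_agents, num_resources, temperature = 5, 5, 0.1
    profile = [0] * num_agents          # all agents start on resource 0
    for _ in range(2000):
        profile = log_linear_step(profile, num_resources, temperature)
    print("final profile:", profile, "welfare:", welfare(profile))
```

With marginal-contribution utilities, the resulting game is a potential game whose potential equals the welfare function, so at low temperature the log-linear dynamics concentrate on welfare-maximizing joint actions; here the agents spread out to cover all five resources.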
Original language: English (US)
Title of host publication: Handbook of Dynamic Game Theory
Publisher: Springer Nature
Pages: 1-36
Number of pages: 36
ISBN (Print): 9783319273358
DOIs
State: Published - Jul 12 2017

