Value Functions Factorization With Latent State Information Sharing in Decentralized Multi-Agent Policy Gradients

Hanhan Zhou, Tian Lan, Vaneet Aggarwal

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

The use of centralized training with decentralized execution for value function factorization has shown promise in addressing cooperative multi-agent reinforcement learning tasks. QMIX, the leading method in this field, achieves superior performance on the StarCraft II micromanagement benchmark. Nonetheless, QMIX's monotonic mixing of per-agent estimates limits the joint action Q-values it can represent and may not provide enough global state information for accurately estimating single-agent value functions, which can lead to suboptimal results. To this end, we present LSF-SAC, a novel framework featuring a variational inference-based information-sharing mechanism that supplies extra latent state information to assist individual agents in value function factorization. We demonstrate that such latent individual state information sharing can significantly expand the representational power of value function factorization, while fully decentralized execution is still maintained in LSF-SAC through a soft actor-critic design. We evaluate LSF-SAC on the StarCraft II micromanagement challenge and demonstrate that it outperforms several state-of-the-art methods on challenging collaborative tasks. We further conduct extensive ablation studies to locate the key factors behind its performance improvements. We believe this new insight can lead to new local value estimation methods and variational deep learning algorithms.
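For readers unfamiliar with the monotonic mixing the abstract critiques, below is a minimal PyTorch sketch of a QMIX-style mixing network (illustrative only, not code from the paper; the class name, layer sizes, and `embed_dim` are assumptions). The absolute value applied to the hypernetwork outputs enforces non-negative mixing weights, i.e. dQ_tot/dQ_i >= 0 for every agent i, which is the representational constraint the abstract refers to.

```python
import torch
import torch.nn as nn

class MonotonicMixer(nn.Module):
    """Minimal QMIX-style mixing sketch (illustrative, not the paper's code):
    per-agent Q-values are combined with state-conditioned, non-negative
    weights so that dQ_tot/dQ_i >= 0 holds for every agent i."""
    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        # Hypernetworks map the global state to mixing weights and biases.
        self.w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.b1 = nn.Linear(state_dim, embed_dim)
        self.w2 = nn.Linear(state_dim, embed_dim)
        self.b2 = nn.Sequential(nn.Linear(state_dim, embed_dim),
                                nn.ReLU(),
                                nn.Linear(embed_dim, 1))

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents), state: (batch, state_dim)
        bs, n = agent_qs.shape
        w1 = torch.abs(self.w1(state)).view(bs, n, -1)  # abs() => monotonicity
        b1 = self.b1(state).view(bs, 1, -1)
        hidden = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)
        w2 = torch.abs(self.w2(state)).view(bs, -1, 1)
        q_tot = torch.bmm(hidden, w2) + self.b2(state).view(bs, 1, 1)
        return q_tot.squeeze(-1).squeeze(-1)  # joint Q-value, shape (batch,)
```

Because every weight applied to `agent_qs` is non-negative, the mixer can only represent joint Q-functions that are monotonic in each agent's estimate; LSF-SAC's contribution, per the abstract, is to augment each agent's local estimate with shared latent state information rather than to rely on this mixing alone.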
Original language: English (US)
Pages (from-to): 1-11
Number of pages: 11
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
DOIs
State: Published - Jul 17 2023
Externally published: Yes
