TY - JOUR
T1 - SMIX(λ)
T2 - Enhancing Centralized Value Functions for Cooperative Multiagent Reinforcement Learning
AU - Yao, Xinghu
AU - Wen, Chao
AU - Wang, Yuhui
AU - Tan, Xiaoyang
N1 - Funding Information:
This work was supported in part by the National Science Foundation of China under Grant 61976115 and Grant 61732006, in part by the AI+ Project of the Nanjing University of Aeronautics and Astronautics (NUAA) under Grant XZA20005 and Grant 56XZA18009, in part by the Research Project under Grant 315025305, and in part by the Graduate Innovation Foundation of NUAA under Grant Kfjj20191608.
Publisher Copyright:
© 2012 IEEE.
PY - 2023/1/1
Y1 - 2023/1/1
N2 - Learning a stable and generalizable centralized value function (CVF) is a crucial but challenging task in multiagent reinforcement learning (MARL), as it has to deal with the issue that the joint action space increases exponentially with the number of agents in such scenarios. This article proposes an approach, named SMIX(λ), that uses off-policy training to achieve this by avoiding the greedy assumption commonly made in CVF learning. As importance sampling for such off-policy training is both computationally costly and numerically unstable, we propose to use the λ-return as a proxy to compute the temporal difference (TD) error. With this new loss function objective, we adopt a modified QMIX network structure as the base to train our model. By further connecting it with the Q(λ) approach from a unified expectation correction viewpoint, we show that the proposed SMIX(λ) is equivalent to Q(λ) and hence shares its convergence properties, while not suffering from the aforementioned curse of dimensionality problem inherent in MARL. Experiments on the StarCraft Multiagent Challenge (SMAC) benchmark demonstrate that our approach not only outperforms several state-of-the-art MARL methods by a large margin but also can be used as a general tool to improve the overall performance of other centralized training with decentralized execution (CTDE)-type algorithms by enhancing their CVFs.
AB - Learning a stable and generalizable centralized value function (CVF) is a crucial but challenging task in multiagent reinforcement learning (MARL), as it has to deal with the issue that the joint action space increases exponentially with the number of agents in such scenarios. This article proposes an approach, named SMIX(λ), that uses off-policy training to achieve this by avoiding the greedy assumption commonly made in CVF learning. As importance sampling for such off-policy training is both computationally costly and numerically unstable, we propose to use the λ-return as a proxy to compute the temporal difference (TD) error. With this new loss function objective, we adopt a modified QMIX network structure as the base to train our model. By further connecting it with the Q(λ) approach from a unified expectation correction viewpoint, we show that the proposed SMIX(λ) is equivalent to Q(λ) and hence shares its convergence properties, while not suffering from the aforementioned curse of dimensionality problem inherent in MARL. Experiments on the StarCraft Multiagent Challenge (SMAC) benchmark demonstrate that our approach not only outperforms several state-of-the-art MARL methods by a large margin but also can be used as a general tool to improve the overall performance of other centralized training with decentralized execution (CTDE)-type algorithms by enhancing their CVFs.
KW - Deep reinforcement learning (DRL)
KW - multiagent reinforcement learning (MARL)
KW - multiagent systems
KW - StarCraft Multiagent Challenge (SMAC)
UR - http://www.scopus.com/inward/record.url?scp=85112192226&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2021.3089493
DO - 10.1109/TNNLS.2021.3089493
M3 - Article
C2 - 34181556
AN - SCOPUS:85112192226
SN - 2162-237X
VL - 34
SP - 52
EP - 63
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 1
ER -