TY - JOUR
T1 - Nested-Wasserstein Distance for Sequence Generation
AU - Zhang, Ruiyi
AU - Chen, Changyou
AU - Gan, Zhe
AU - Wen, Zheng
AU - Wang, Wenlin
AU - Carin, Lawrence
PY - 2019
Y1 - 2019
N2 - Reinforcement learning (RL) has been widely studied for improving sequence-generation models. However, the conventional rewards used for RL training typically cannot capture sufficient semantic information and therefore render model bias. Further, the sparse and delayed rewards make RL exploration inefficient. To alleviate these issues, we propose the concept of nested-Wasserstein distance for measuring the distance between two policy distributions. Based on this, a novel nested-Wasserstein self-imitation learning framework is developed, encouraging the model to exploit historical high-rewarded sequences for deeper exploration and better semantic matching. Our solution can be understood as approximately executing proximal policy optimization with nested-Wasserstein trust-regions. Experiments on a variety of unconditional and conditional sequence-generation tasks demonstrate that the proposed approach consistently leads to improved performance.
AB - Reinforcement learning (RL) has been widely studied for improving sequence-generation models. However, the conventional rewards used for RL training typically cannot capture sufficient semantic information and therefore render model bias. Further, the sparse and delayed rewards make RL exploration inefficient. To alleviate these issues, we propose the concept of nested-Wasserstein distance for measuring the distance between two policy distributions. Based on this, a novel nested-Wasserstein self-imitation learning framework is developed, encouraging the model to exploit historical high-rewarded sequences for deeper exploration and better semantic matching. Our solution can be understood as approximately executing proximal policy optimization with nested-Wasserstein trust-regions. Experiments on a variety of unconditional and conditional sequence-generation tasks demonstrate that the proposed approach consistently leads to improved performance.
UR - https://www.mendeley.com/catalogue/dff691ed-42be-3cfc-87b6-68c10c500283/
M3 - Article
SP - 1
EP - 11
JO - NeurIPS 2019 Workshop
JF - NeurIPS 2019 Workshop
ER -