TY - JOUR
T1 - Recurrent policy gradients
AU - Wierstra, Daan
AU - Förster, Alexander
AU - Peters, Jan
AU - Schmidhuber, Jürgen
N1 - Generated from Scopus record by KAUST IRTS on 2022-09-14
PY - 2009/9/9
Y1 - 2009/9/9
AB - Reinforcement learning for partially observable Markov decision problems (POMDPs) is a challenge as it requires policies with an internal state. Traditional approaches suffer significantly from this shortcoming and usually make strong assumptions about the problem domain, such as perfect system models, state estimators and a Markovian hidden system. Recurrent neural networks (RNNs) offer a natural framework for dealing with policy learning using hidden state and require only a few limiting assumptions. As they can be trained well using gradient descent, they are well suited for policy gradient approaches. In this paper, we present a policy gradient method, the Recurrent Policy Gradient, which constitutes a model-free reinforcement learning method. It is aimed at training limited-memory stochastic policies on problems which require long-term memories of past observations. The approach involves approximating a policy gradient for a recurrent neural network by backpropagating return-weighted characteristic eligibilities through time. Using a "Long Short-Term Memory" RNN architecture, we are able to outperform previous RL methods on three important benchmark tasks. Furthermore, we show that using history-dependent baselines helps reduce estimation variance significantly, thus enabling our approach to tackle more challenging, highly stochastic environments. © The Author 2009. Published by Oxford University Press. All rights reserved.
UR - https://academic.oup.com/jigpal/article-lookup/doi/10.1093/jigpal/jzp049
UR - http://www.scopus.com/inward/record.url?scp=77957283019&partnerID=8YFLogxK
U2 - 10.1093/jigpal/jzp049
DO - 10.1093/jigpal/jzp049
M3 - Article
SN - 1368-9894
VL - 18
SP - 620
EP - 634
JO - Logic Journal of the IGPL
JF - Logic Journal of the IGPL
IS - 5
ER -