TY - JOUR
T1 - Parameter-exploring policy gradients
AU - Sehnke, Frank
AU - Osendorfer, Christian
AU - Rückstieß, Thomas
AU - Graves, Alex
AU - Peters, Jan
AU - Schmidhuber, Jürgen
N1 - Generated from Scopus record by KAUST IRTS on 2022-09-14
PY - 2010/5/1
Y1 - 2010/5/1
AB - We present a model-free reinforcement learning method for partially observable Markov decision problems. Our method estimates a likelihood gradient by sampling directly in parameter space, which leads to lower variance gradient estimates than obtained by regular policy gradient methods. We show that for several complex control tasks, including robust standing with a humanoid robot, this method outperforms well-known algorithms from the fields of standard policy gradients, finite difference methods and population based heuristics. We also show that the improvement is largest when the parameter samples are drawn symmetrically. Lastly we analyse the importance of the individual components of our method by incrementally incorporating them into the other algorithms, and measuring the gain in performance after each step. © 2009 Elsevier Ltd.
UR - https://linkinghub.elsevier.com/retrieve/pii/S0893608009003220
UR - http://www.scopus.com/inward/record.url?scp=77950297907&partnerID=8YFLogxK
U2 - 10.1016/j.neunet.2009.12.004
DO - 10.1016/j.neunet.2009.12.004
M3 - Article
SN - 0893-6080
VL - 23
SP - 551
EP - 559
JO - Neural Networks
JF - Neural Networks
IS - 4
ER -