Parameter-exploring policy gradients

Frank Sehnke, Christian Osendorfer, Thomas Rückstieß, Alex Graves, Jan Peters, Jürgen Schmidhuber

Research output: Contribution to journal › Article › peer-review

196 Scopus citations

Abstract

We present a model-free reinforcement learning method for partially observable Markov decision problems. Our method estimates a likelihood gradient by sampling directly in parameter space, which leads to lower-variance gradient estimates than those obtained by regular policy gradient methods. We show that for several complex control tasks, including robust standing with a humanoid robot, this method outperforms well-known algorithms from the fields of standard policy gradients, finite difference methods and population-based heuristics. We also show that the improvement is largest when the parameter samples are drawn symmetrically. Lastly, we analyse the importance of the individual components of our method by incrementally incorporating them into the other algorithms and measuring the gain in performance after each step.
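
The abstract describes estimating the gradient by sampling directly in parameter space, with the largest gains coming from symmetric sample pairs. The following is a minimal illustrative sketch of that idea only: the toy black-box return function, the hyperparameters, and the exact update equations are assumptions chosen for readability, not the authors' implementation.

import numpy as np

def episode_return(theta):
    # Hypothetical black-box return: higher is better, peaks at a fixed target.
    target = np.array([1.0, -2.0, 0.5])
    return -float(np.sum((theta - target) ** 2))

def symmetric_parameter_exploration(dim=3, iters=200, alpha_mu=0.2, alpha_sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)            # mean of the parameter distribution
    sigma = np.full(dim, 2.0)     # per-parameter exploration width
    baseline = 0.0                # moving average of returns, used in the sigma update
    for _ in range(iters):
        eps = rng.normal(0.0, sigma)        # one perturbation drawn in parameter space
        r_plus = episode_return(mu + eps)   # symmetric pair: evaluate +eps ...
        r_minus = episode_return(mu - eps)  # ... and -eps with the same perturbation
        # Mean update driven by the return difference of the symmetric pair.
        mu += alpha_mu * eps * (r_plus - r_minus) / 2.0
        # Width update driven by the pair's average return relative to the baseline.
        r_avg = (r_plus + r_minus) / 2.0
        sigma += alpha_sigma * ((eps ** 2 - sigma ** 2) / sigma) * (r_avg - baseline)
        sigma = np.maximum(sigma, 1e-3)     # keep exploration widths positive
        baseline = 0.9 * baseline + 0.1 * r_avg
    return mu

print(symmetric_parameter_exploration())

Because each symmetric pair shares one perturbation, the pair's return difference isolates the effect of that perturbation's direction, which is one intuition for the variance reduction the abstract reports.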
  • Original language: English (US)
  • Pages (from-to): 551-559
  • Number of pages: 9
  • Journal: Neural Networks
  • Volume: 23
  • Issue number: 4
  • DOIs
  • State: Published - May 1, 2010
  • Externally published: Yes

ASJC Scopus subject areas

  • Artificial Intelligence
  • Cognitive Neuroscience
