TY - JOUR
T1 - Curiosity driven reinforcement learning for motion planning on humanoids
AU - Frank, Mikhail
AU - Leitner, Jürgen
AU - Stollenga, Marijn
AU - Förster, Alexander
AU - Schmidhuber, Jürgen
N1 - Generated from Scopus record by KAUST IRTS on 2022-09-14
PY - 2014/1/1
Y1 - 2014/1/1
AB - Most previous work on artificial curiosity (AC) and intrinsic motivation focuses on basic concepts and theory. Experimental results are generally limited to toy scenarios, such as navigation in a simulated maze or control of a simple mechanical system with one or two degrees of freedom. To study AC in a more realistic setting, we embody a curious agent in the complex iCub humanoid robot. Our novel reinforcement learning (RL) framework consists of a state-of-the-art, low-level, reactive control layer, which controls the iCub while respecting constraints, and a high-level curious agent, which explores the iCub's state-action space by maximizing information gain, learns a world model from experience, and controls the actual iCub hardware in real time. To the best of our knowledge, this is the first embodied curious agent for real-time motion planning on a humanoid. We demonstrate that it can learn compact Markov models representing large regions of the iCub's configuration space, and that the iCub explores intelligently, showing interest in its physical constraints as well as in objects it finds in its environment.
UR - http://journal.frontiersin.org/article/10.3389/fnbot.2013.00025/abstract
UR - http://www.scopus.com/inward/record.url?scp=84995579611&partnerID=8YFLogxK
DO - 10.3389/fnbot.2013.00025
M3 - Article
SN - 1662-5218
VL - 7
JO - Frontiers in Neurorobotics
JF - Frontiers in Neurorobotics
IS - JAN
ER -