Abstract
An online learning algorithm for reinforcement learning with continually running recurrent networks in nonstationary reactive environments is described. Various kinds of reinforcement are considered as special types of input to an agent living in the environment. The agent's only goal is to maximize the amount of reinforcement received over time. Supervised learning techniques for recurrent networks serve to construct a differentiable model of the environmental dynamics which includes a model of future reinforcement. This model is used for learning goal-directed behavior in an online fashion. The possibility of using the system for planning future action sequences is investigated, and this approach is compared to approaches based on temporal difference methods. A connection to metalearning (learning how to learn) is noted.
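The core idea summarized above — fit a differentiable model of environmental dynamics and reinforcement with supervised learning, then improve the controller by propagating gradients through that model — can be illustrated with a deliberately tiny sketch. This is my own illustration, not the paper's setup: the paper uses continually running recurrent networks, whereas the toy below uses a reward model that is linear in hand-picked features so plain SGD suffices, and the environment, feature set, and learning rates are all assumptions.

```python
import random

random.seed(0)

# Toy environment (assumed for illustration): reward r(x, a) = -(a - 2x)^2,
# so the optimal action for observation x is a = 2x.
def true_reward(x, a):
    return -(a - 2.0 * x) ** 2

def features(x, a):
    # Hand-picked features making the true reward exactly representable.
    return [a * a, a * x, x * x]

# Phase 1: supervised learning of a differentiable reward model
# r_hat = v[0]*a^2 + v[1]*a*x + v[2]*x^2, from randomly explored actions.
v = [0.0, 0.0, 0.0]
lr_model = 0.01
for _ in range(5000):
    x = random.uniform(-1.0, 1.0)
    a = random.uniform(-2.5, 2.5)          # random exploration
    f = features(x, a)
    r_hat = sum(vi * fi for vi, fi in zip(v, f))
    err = r_hat - true_reward(x, a)
    for i in range(3):                     # SGD on squared prediction error
        v[i] -= lr_model * 2.0 * err * f[i]

# Phase 2: goal-directed learning. The controller a = w_c * x is improved
# by gradient ascent on the *model's* predicted reward: the training signal
# reaches the controller only through the learned differentiable model.
w_c = 0.0
lr_ctrl = 0.05
for _ in range(2000):
    x = random.uniform(-1.0, 1.0)
    # With a = w_c * x:  r_hat = v0*w_c^2*x^2 + v1*w_c*x^2 + v2*x^2,
    # so d r_hat / d w_c = (2*v0*w_c + v1) * x^2.
    grad = (2.0 * v[0] * w_c + v[1]) * x * x
    w_c += lr_ctrl * grad

print(round(w_c, 2))  # approaches 2.0, the optimal action coefficient
```

Note the division of labor the abstract describes: reinforcement is just another predicted quantity for the model, and goal-directed behavior emerges from ascending the model's reward prediction rather than from an externally supplied error signal for the controller.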
Original language | English (US) |
---|---|
Title of host publication | IJCNN. International Joint Conference on Neural Networks |
Publisher | Publ by IEEE, Piscataway |
Pages | 253-258 |
Number of pages | 6 |
DOIs | |
State | Published - Jan 1 1990 |
Externally published | Yes |