Abstract
Usually, weight changes in neural networks are caused exclusively by some hard-wired learning algorithm with many specific limitations. I show that it is in principle possible to let the network run and improve its own weight change algorithm (without significant theoretical limits). I derive an initial gradient-based supervised sequence learning algorithm for an `introspective' recurrent network that can `speak' about its own weight matrix in terms of activations. It uses special subsets of its input and output units for observing its own errors and for explicitly analyzing and manipulating all of its own weights, including those weights responsible for analyzing and manipulating weights. The result is the first `self-referential' neural network with explicit potential control over all adaptive parameters governing its behaviour.
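The paper's actual method is a gradient-based supervised sequence learning algorithm derived for this architecture; the sketch below is only a rough illustration of the introspective mechanism the abstract describes. All names, unit indices, sizes, and the address encoding are assumptions for illustration, not taken from the paper: a toy recurrent step in which one dedicated unit receives the value of an addressed entry of the network's own weight matrix as an activation, and two other units address and modify another entry.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units = 8                                          # total recurrent units (hypothetical size)
W = 0.1 * rng.standard_normal((n_units, n_units))    # the weight matrix the net can inspect and modify
n_weights = W.size

# Hypothetical unit layout: the last four units play the introspective roles.
READ_ADDR, READ_VAL = 4, 5        # units that address a weight and receive its value
WRITE_ADDR, WRITE_DELTA = 6, 7    # units that address a weight and propose a change to it

def step(x, a, W):
    """One recurrent step: ordinary dynamics plus self-inspection and self-modification."""
    net = W @ a
    net[:x.size] += x             # external input drives the first units
    a_new = np.tanh(net)

    # 'Analyzing' units: the addressed weight's value is fed back as an activation.
    idx_r = int(np.clip(a_new[READ_ADDR], 0.0, 1.0) * (n_weights - 1))
    a_new[READ_VAL] = W.flat[idx_r]

    # 'Modifying' units: the proposed (scaled) delta is added to the addressed weight.
    idx_w = int(np.clip(a_new[WRITE_ADDR], 0.0, 1.0) * (n_weights - 1))
    W.flat[idx_w] += 0.01 * a_new[WRITE_DELTA]
    return a_new, W

a = np.zeros(n_units)
for t in range(5):
    x = rng.standard_normal(3)
    a, W = step(x, a, W)
    print(t, round(float(a[READ_VAL]), 4), round(float(np.linalg.norm(W)), 4))
```

Note that the weights feeding the addressing and delta units are themselves entries of W, so they too fall within the network's potential reach, which is the sense in which the abstract calls the network `self-referential'. The paper's learning rule for training these units is not reproduced here.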
Original language | English (US)
---|---
Title of host publication | IEE Conference Publication
Publisher | Publ by IEE, Stevenage, United Kingdom
Pages | 191-194
Number of pages | 4
State | Published - Jan 1 1993
Externally published | Yes