An `introspective' network that can learn to run its own weight change algorithm

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Scopus citations

Abstract

Usually weight changes in neural networks are exclusively caused by some hard-wired learning algorithm with many specific limitations. I show that it is in principle possible to let the network run and improve its own weight change algorithm (without significant theoretical limits). I derive an initial gradient-based supervised sequence learning algorithm for an `introspective' recurrent network that can `speak' about its own weight matrix in terms of activations. It uses special subsets of its input and output units for observing its own errors and for explicitly analyzing and manipulating all of its own weights, including those weights responsible for analyzing and manipulating weights. The result is the first `self-referential' neural network with explicit potential control over all adaptive parameters governing its behaviour.
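The abstract describes the network's self-referential interface in prose rather than code. Below is a minimal sketch, assuming a single fully recurrent weight matrix, of how dedicated input units could let a net observe its own error and a currently addressed weight, while dedicated output units address a weight "in terms of activations", read it, and write a small change to it. All names (IntrospectiveRNN, the unit layout, the 0.01 step size) are illustrative assumptions, and the sketch deliberately omits the paper's actual gradient-based supervised sequence learning algorithm for training W.

```python
import numpy as np

class IntrospectiveRNN:
    """Fully recurrent net whose extra I/O units expose its own weight matrix.

    Hypothetical illustration of the interface from the abstract, not the
    paper's derived algorithm.
    """
    def __init__(self, n_task_in=3, n_hidden=8, n_task_out=2, seed=0):
        rng = np.random.default_rng(seed)
        # Input units: task inputs + 1 error-observing unit + 1 unit that
        # reports the value of the currently addressed weight.
        self.n_in = n_task_in + 2
        # Output units: task outputs + 2 address units + 1 delta unit.
        self.n_out = n_task_out + 3
        self.n = self.n_in + n_hidden + self.n_out
        # One weight matrix over all units; every entry, including those
        # driving the address/delta units, is itself addressable.
        self.W = rng.normal(scale=0.1, size=(self.n, self.n))
        self.act = np.zeros(self.n)
        self.n_task_out = n_task_out

    def step(self, x, error, observed_weight):
        ext = np.zeros(self.n)
        ext[:len(x)] = x
        ext[self.n_in - 2] = error            # "observe its own errors"
        ext[self.n_in - 1] = observed_weight  # addressed weight fed back in
        self.act = np.tanh(self.W @ self.act + ext)

        out = self.act[-self.n_out:]
        task_out = out[:self.n_task_out]
        # Address a weight "in terms of activations": map two activations
        # in (-1, 1) onto row/column indices of W.
        i = int((out[-3] + 1) / 2 * (self.n - 1))
        j = int((out[-2] + 1) / 2 * (self.n - 1))
        new_observed = self.W[i, j]           # analysis: read any own weight
        self.W[i, j] += 0.01 * out[-1]        # manipulation: change it, even
                                              # weights feeding these units
        return task_out, new_observed

# Toy rollout: at each step the net sees its previous error and one of its
# own weights, and may modify any weight it chooses to address.
net = IntrospectiveRNN()
err, w_obs = 0.0, 0.0
for t in range(10):
    y, w_obs = net.step(np.ones(3), err, w_obs)
    err = float(np.sum((y - 0.5) ** 2))       # squared error vs. a toy target
```

In a full implementation, W would be trained by the paper's gradient-based supervised sequence learning algorithm, so that the weight-reading and weight-writing behaviour itself improves with experience.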
Original language: English (US)
Title of host publication: IEE Conference Publication
Publisher: Publ by IEE, Stevenage, United Kingdom
Pages: 191-194
Number of pages: 4
State: Published - Jan 1 1993
Externally published: Yes
