Abstract
Learning to solve small instances of a problem should help in solving large instances. Unfortunately, most neural network architectures do not exhibit this form of scalability. Our Multi-Dimensional Recurrent LSTM Networks, however, exhibit a high degree of scalability, as we demonstrate empirically in the domain of flexible-size board games. This allows them to be trained from scratch up to the level of human beginners, without using domain knowledge.
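To make the scalability claim concrete, below is a minimal sketch of the core recurrence behind a 2D multi-dimensional LSTM. It is not the authors' implementation: the class name `MDLSTM2D`, the method `scan`, the dimensions, and the initialization are all illustrative assumptions, and only a single top-left-to-bottom-right sweep is shown (architectures of this kind typically combine sweeps from each board corner). The point it demonstrates is that each cell (i, j) combines its input with the states of its two already-visited neighbours, so one fixed set of weights applies to a board of any size.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MDLSTM2D:
    """Minimal 2D multi-dimensional LSTM (hypothetical sketch, single sweep)."""

    # input gate, one forget gate per axis, output gate, candidate activation
    GATES = ("i", "f1", "f2", "o", "g")

    def __init__(self, in_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.hid_dim = hid_dim
        # Weights from the cell's own input and from each neighbour's hidden state.
        self.W  = {k: 0.1 * rng.standard_normal((in_dim,  hid_dim)) for k in self.GATES}
        self.U1 = {k: 0.1 * rng.standard_normal((hid_dim, hid_dim)) for k in self.GATES}
        self.U2 = {k: 0.1 * rng.standard_normal((hid_dim, hid_dim)) for k in self.GATES}
        self.b  = {k: np.zeros(hid_dim) for k in self.GATES}

    def scan(self, x):
        """x: (rows, cols, in_dim) board encoding -> (rows, cols, hid_dim) states."""
        rows, cols, _ = x.shape
        # Pad with zero states so border cells have well-defined neighbours.
        h = np.zeros((rows + 1, cols + 1, self.hid_dim))
        c = np.zeros((rows + 1, cols + 1, self.hid_dim))
        for r in range(rows):
            for col in range(cols):
                h1, c1 = h[r, col + 1], c[r, col + 1]  # neighbour above (r-1, col)
                h2, c2 = h[r + 1, col], c[r + 1, col]  # neighbour to the left (r, col-1)
                pre = {k: x[r, col] @ self.W[k] + h1 @ self.U1[k]
                          + h2 @ self.U2[k] + self.b[k] for k in self.GATES}
                i  = sigmoid(pre["i"])
                f1 = sigmoid(pre["f1"])  # forget gate for the vertical axis
                f2 = sigmoid(pre["f2"])  # forget gate for the horizontal axis
                o  = sigmoid(pre["o"])
                g  = np.tanh(pre["g"])
                c_new = i * g + f1 * c1 + f2 * c2
                h[r + 1, col + 1] = o * np.tanh(c_new)
                c[r + 1, col + 1] = c_new
        return h[1:, 1:]

# The same weights process boards of different sizes, e.g. 5x5 and 9x9:
net = MDLSTM2D(in_dim=3, hid_dim=8)
small = net.scan(np.random.default_rng(1).standard_normal((5, 5, 3)))
large = net.scan(np.random.default_rng(2).standard_normal((9, 9, 3)))
print(small.shape, large.shape)  # (5, 5, 8) (9, 9, 8)
```

Because the recurrence is defined per cell rather than per board, training on small boards and evaluating on larger ones requires no architectural change, which is the scalability property the abstract refers to.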
| Original language | English (US) |
| --- | --- |
| Title of host publication | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
| Pages | 1005-1014 |
| Number of pages | 10 |
| DOIs | |
| State | Published - Nov 19 2009 |
| Externally published | Yes |
ASJC Scopus subject areas
- Theoretical Computer Science
- General Computer Science