Unsupervised Learning of Temporal Abstractions with Slot-Based Transformers

Anand Gopalakrishnan, Kazuki Irie, Juergen Schmidhuber, Sjoerd van Steenkiste

Research output: Contribution to journal › Article › peer-review


Abstract

The discovery of reusable subroutines simplifies decision making and planning in complex reinforcement learning problems. Previous approaches propose to learn such temporal abstractions in an unsupervised fashion by observing state-action trajectories gathered from executing a policy. However, a current limitation is that they process each trajectory in an entirely sequential manner, which prevents them from revising earlier decisions about subroutine boundary points in light of new incoming information. In this work, we propose the Slot-based Transformer for Temporal Abstraction (SloTTAr), a fully parallel approach that integrates sequence-processing transformers with a slot attention module to discover subroutines in an unsupervised fashion, while leveraging adaptive computation to learn the number of such subroutines solely from their empirical distribution. We demonstrate how SloTTAr is capable of outperforming strong baselines in terms of boundary point discovery, even for sequences containing variable numbers of subroutines, while being up to seven times faster to train on existing benchmarks.
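For readers who want a concrete picture of the grouping mechanism the abstract describes, it can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: SlotAttention follows the standard formulation of Locatello et al. (2020), while SloTTArSketch, its module names, and all hyperparameters are hypothetical, and the paper's per-slot decoder and adaptive-computation mechanism for inferring the number of subroutines are omitted.

```python
import torch
import torch.nn as nn

class SlotAttention(nn.Module):
    """Standard slot attention (Locatello et al., 2020): a fixed number of
    slots iteratively compete, via a softmax over slots, to explain inputs."""
    def __init__(self, num_slots, dim, iters=3):
        super().__init__()
        self.num_slots, self.iters, self.scale = num_slots, iters, dim ** -0.5
        self.slots_mu = nn.Parameter(torch.randn(1, 1, dim))
        self.slots_logsigma = nn.Parameter(torch.zeros(1, 1, dim))
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.gru = nn.GRUCell(dim, dim)
        self.norm_inputs = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)

    def forward(self, inputs):                       # inputs: (B, T, dim)
        b, t, d = inputs.shape
        inputs = self.norm_inputs(inputs)
        k, v = self.to_k(inputs), self.to_v(inputs)
        slots = self.slots_mu + self.slots_logsigma.exp() * torch.randn(
            b, self.num_slots, d, device=inputs.device)
        for _ in range(self.iters):
            q = self.to_q(self.norm_slots(slots))
            # Softmax over the slot axis makes slots compete for timesteps.
            attn = torch.softmax(
                torch.einsum('bkd,btd->bkt', q, k) * self.scale, dim=1)
            attn = attn / attn.sum(dim=-1, keepdim=True)   # weighted mean
            updates = torch.einsum('bkt,btd->bkd', attn, v)
            slots = self.gru(updates.reshape(-1, d),
                             slots.reshape(-1, d)).reshape(b, -1, d)
        return slots, attn                           # attn: (B, K, T)

class SloTTArSketch(nn.Module):
    """Hypothetical composition: encode the whole state-action trajectory in
    parallel with a transformer, then group timesteps into subroutine slots."""
    def __init__(self, obs_dim, act_dim, dim=64, num_slots=4):
        super().__init__()
        self.embed = nn.Linear(obs_dim + act_dim, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.slot_attn = SlotAttention(num_slots, dim)

    def forward(self, states, actions):
        # Positional encodings omitted for brevity; a real model needs them.
        x = self.embed(torch.cat([states, actions], dim=-1))  # (B, T, dim)
        feats = self.encoder(x)            # fully parallel, non-sequential
        return self.slot_attn(feats)

# Toy usage: changes in slot assignment mark candidate subroutine boundaries.
model = SloTTArSketch(obs_dim=16, act_dim=4)
slots, attn = model(torch.randn(2, 50, 16), torch.randn(2, 50, 4))
segments = attn.argmax(dim=1)              # (B, T) per-timestep slot index
```

Because the transformer sees the whole trajectory at once, slot assignments (and hence boundary points) are produced jointly rather than committed to one step at a time, which is the property the abstract contrasts with sequential approaches.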
Original language: English (US)
Pages (from-to): 1-34
Number of pages: 34
Journal: Neural Computation
DOIs
State: Published - Feb 2 2023

ASJC Scopus subject areas

  • Cognitive Neuroscience

