TY - JOUR
T1 - Unsupervised Learning of Temporal Abstractions with Slot-Based Transformers
AU - Gopalakrishnan, Anand
AU - Irie, Kazuki
AU - Schmidhuber, Juergen
AU - van Steenkiste, Sjoerd
N1 - KAUST Repository Item: Exported on 2023-02-09
Acknowledgements: We thank Aditya Ramesh, Aleksandar Stanić and Klaus Greff for useful discussions and valuable feedback. The large majority of this research was funded by Swiss National Science Foundation grant 200021_192356, project NEUSYM. This work was also supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project IDs s1023 and s1154. We also thank NVIDIA Corporation for donating DGX machines as part of the Pioneers of AI Research Award.
PY - 2023/2/2
Y1 - 2023/2/2
AB - The discovery of reusable subroutines simplifies decision making and planning in complex reinforcement learning problems. Previous approaches propose to learn such temporal abstractions in an unsupervised fashion by observing state-action trajectories gathered from executing a policy. However, a current limitation is that they process each trajectory in an entirely sequential manner, which prevents them from revising earlier decisions about subroutine boundary points in light of new incoming information. In this work, we propose the Slot-based Transformer for Temporal Abstraction (SloTTAr), a fully parallel approach that integrates sequence-processing transformers with a slot attention module to discover subroutines in an unsupervised fashion, while leveraging adaptive computation to learn the number of such subroutines solely from their empirical distribution. We demonstrate that SloTTAr outperforms strong baselines at boundary point discovery, even for sequences containing variable numbers of subroutines, while being up to seven times faster to train on existing benchmarks.
UR - http://hdl.handle.net/10754/676051
UR - https://direct.mit.edu/neco/article/doi/10.1162/neco_a_01567/114732/Unsupervised-Learning-of-Temporal-Abstractions
DO - 10.1162/neco_a_01567
M3 - Article
C2 - 36746145
SN - 0899-7667
SP - 1
EP - 34
JO - Neural Computation
JF - Neural Computation
ER -