TY - GEN
T1 - Going Beyond Linear Transformers with Recurrent Fast Weight Programmers
AU - Irie, Kazuki
AU - Schlag, Imanol
AU - Csordás, Róbert
AU - Schmidhuber, Jürgen
N1 - Publisher Copyright:
© 2021 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2021
Y1 - 2021
AB - Transformers with linearised attention (“linear Transformers”) have demonstrated the practical scalability and effectiveness of outer product-based Fast Weight Programmers (FWPs) from the '90s. However, the original FWP formulation is more general than that of linear Transformers: a slow neural network (NN) continually reprograms the weights of a fast NN with arbitrary architecture. In existing linear Transformers, both NNs are feedforward and consist of a single layer. Here we explore new variations by adding recurrence to the slow and fast nets. We evaluate our novel recurrent FWPs (RFWPs) on two synthetic algorithmic tasks (code execution and sequential ListOps), WikiText-103 language models, and on the Atari 2600 2D game environment. Our models exhibit properties of Transformers and RNNs. In the reinforcement learning setting, we report large improvements over LSTM in several Atari games. Our code is public.
UR - http://www.scopus.com/inward/record.url?scp=85125029822&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85125029822
T3 - Advances in Neural Information Processing Systems
SP - 7703
EP - 7717
BT - Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
A2 - Ranzato, Marc'Aurelio
A2 - Beygelzimer, Alina
A2 - Dauphin, Yann
A2 - Liang, Percy S.
A2 - Wortman Vaughan, Jenn
PB - Neural Information Processing Systems Foundation
T2 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Y2 - 6 December 2021 through 14 December 2021
ER -