Improving Stateful Premise Selection with Transformers

Krsto Proroković, Michael Wand, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Premise selection is a fundamental task for automated reasoning in large theories. A recently proposed approach formulates premise selection as a sequence-to-sequence problem, called stateful premise selection. Given a theorem statement, the goal of a stateful premise selection method is to predict the set of premises that would be useful in proving it. In this work, we use the Transformer architecture for stateful premise selection. We outperform the existing recurrent neural network baseline and improve upon the state of the art on a recently proposed dataset.
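
To make the sequence-to-sequence formulation concrete, below is a minimal sketch of the idea the abstract describes: an encoder reads a tokenized theorem statement and a decoder emits premise names one token at a time. This is not the authors' implementation; the class name PremiseSelector, the vocabulary size, all hyperparameters, and the choice of PyTorch's built-in nn.Transformer are illustrative assumptions.

import torch
import torch.nn as nn


class PremiseSelector(nn.Module):
    """Hypothetical encoder-decoder model: theorem tokens in, premise tokens out."""

    def __init__(self, vocab_size: int, d_model: int = 256, nhead: int = 4,
                 num_layers: int = 3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        # Scores over the vocabulary of premise tokens at each decoder step.
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, theorem_tokens: torch.Tensor,
                premise_tokens: torch.Tensor) -> torch.Tensor:
        # theorem_tokens: (batch, src_len) token ids of the theorem statement
        # premise_tokens: (batch, tgt_len) token ids of the premises so far
        src = self.embed(theorem_tokens)
        tgt = self.embed(premise_tokens)
        # Causal mask so the decoder cannot attend to future premise tokens.
        tgt_mask = self.transformer.generate_square_subsequent_mask(
            premise_tokens.size(1))
        hidden = self.transformer(src, tgt, tgt_mask=tgt_mask)
        return self.out(hidden)


# Toy usage: a batch of 2 theorem statements and partial premise sequences.
model = PremiseSelector(vocab_size=1000)
theorems = torch.randint(0, 1000, (2, 32))
premises = torch.randint(0, 1000, (2, 8))
logits = model(theorems, premises)  # shape (2, 8, 1000)

At inference time one would decode premise tokens autoregressively (e.g. greedily or with beam search) and collect the emitted premise names into the predicted set.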
Original language: English (US)
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 84-89
Number of pages: 6
ISBN (Print): 9783030810962
DOIs
State: Published - Jan 1 2021
Externally published: Yes
