TY - GEN
T1 - Learning One Abstract Bit at a Time Through Self-invented Experiments Encoded as Neural Networks
AU - Herrmann, Vincent
AU - Kirsch, Louis
AU - Schmidhuber, Jürgen
N1 - Publisher Copyright:
© 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2024
Y1 - 2024
AB - There are two important things in science: (A) Finding answers to given questions, and (B) Coming up with good questions. Our artificial scientists not only learn to answer given questions, but also continually invent new questions, by proposing hypotheses to be verified or falsified through potentially complex and time-consuming experiments, including thought experiments akin to those of mathematicians. While an artificial scientist expands its knowledge, it remains biased towards the simplest, least costly experiments that still have surprising outcomes, until they become boring. We present an empirical analysis of the automatic generation of interesting experiments. In the first setting, we investigate self-invented experiments in a reinforcement-providing environment and show that they lead to effective exploration. In the second setting, pure thought experiments are implemented as the weights of recurrent neural networks generated by a neural experiment generator. Initially interesting thought experiments may become boring over time.
KW - Exploration
KW - Reinforcement Learning
UR - http://www.scopus.com/inward/record.url?scp=85177830238&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-47958-8_16
DO - 10.1007/978-3-031-47958-8_16
M3 - Conference contribution
AN - SCOPUS:85177830238
SN - 9783031479571
T3 - Communications in Computer and Information Science
SP - 254
EP - 274
BT - Active Inference - 4th International Workshop, IWAI 2023, Revised Selected Papers
A2 - Buckley, Christopher L.
A2 - Cialfi, Daniela
A2 - Lanillos, Pablo
A2 - Ramstead, Maxwell
A2 - Verbelen, Tim
A2 - Sajid, Noor
A2 - Shimazaki, Hideaki
A2 - Wisse, Martijn
PB - Springer Science and Business Media Deutschland GmbH
T2 - 4th International Workshop on Active Inference, IWAI 2023
Y2 - 13 September 2023 through 15 September 2023
ER -