Towards artificial general intelligence via a multimodal foundation model

Nanyi Fei, Zhiwu Lu, Yizhao Gao, Guoxing Yang, Yuqi Huo, Jingyuan Wen, Haoyu Lu, Ruihua Song, Xin Gao, Tao Xiang, Hao Sun, Ji-Rong Wen

Research output: Contribution to journal › Article › peer-review

101 Scopus citations

Abstract

The fundamental goal of artificial intelligence (AI) is to mimic the core cognitive activities of humans. Despite tremendous success in AI research, most existing methods possess only single-cognitive abilities. To overcome this limitation and take a solid step towards artificial general intelligence (AGI), we develop a foundation model pre-trained on large-scale multimodal data, which can be quickly adapted to various downstream cognitive tasks. To achieve this goal, we propose to pre-train our foundation model by self-supervised learning on weakly semantically correlated data crawled from the Internet, and we show that promising results can be obtained on a wide range of downstream tasks. In particular, with the developed model-interpretability tools, we demonstrate that our foundation model now possesses strong imagination ability. We believe that our work makes a transformative stride towards AGI, from our common practice of “weak or narrow AI” to that of “strong or generalized AI”.
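The abstract describes self-supervised pre-training on weakly semantically correlated image-text pairs crawled from the Internet. As a rough illustration only, the sketch below shows a generic two-tower, CLIP-style symmetric contrastive (InfoNCE) objective for aligning image and text embeddings; the function name, embedding dimension, and temperature are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch (not the authors' code) of cross-modal contrastive
# pre-training: a two-tower model aligns image and text embeddings
# produced from weakly correlated web image-text pairs.
import torch
import torch.nn.functional as F

def info_nce_loss(image_emb: torch.Tensor,
                  text_emb: torch.Tensor,
                  temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of (image, text) pairs.

    image_emb, text_emb: [batch, dim] embeddings from separate encoders.
    Matching pairs share the same row index; all other rows in the batch
    act as negatives.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Cosine-similarity logits between every image and every text in the batch.
    logits = image_emb @ text_emb.t() / temperature      # [batch, batch]
    targets = torch.arange(logits.size(0), device=logits.device)

    loss_i2t = F.cross_entropy(logits, targets)          # image -> text
    loss_t2i = F.cross_entropy(logits.t(), targets)      # text -> image
    return 0.5 * (loss_i2t + loss_t2i)

if __name__ == "__main__":
    # Random stand-ins for encoder outputs, just to show the call pattern.
    img = torch.randn(8, 256)   # e.g. outputs of an image encoder
    txt = torch.randn(8, 256)   # e.g. outputs of a text encoder
    print(info_nce_loss(img, txt).item())
```

Under this kind of objective, the weak correlation of web-crawled pairs is tolerated because each caption only needs to be more similar to its own image than to the other images in the batch, not an exact description of it.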
Original language: English (US)
Journal: Nature Communications
Volume: 13
Issue number: 1
DOIs
State: Published - Jun 2 2022

ASJC Scopus subject areas

  • General Biochemistry, Genetics and Molecular Biology
  • General Chemistry
  • General Physics and Astronomy
