Pixels to Graphs by Associative Embedding

Alejandro Newell, Jia Deng

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Graphs are a useful abstraction of image content. Not only can graphs represent details about individual objects in a scene, but they can also capture the interactions between pairs of objects. We present a method for training a convolutional neural network such that it takes in an input image and produces a full graph definition. This is done end-to-end in a single stage with the use of associative embeddings. The network learns to simultaneously identify all of the elements that make up a graph and piece them together. We benchmark on the Visual Genome dataset and demonstrate state-of-the-art performance on the challenging task of scene graph generation.
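The abstract describes grouping detected graph elements via associative embeddings: vertex (object) detections each carry an embedding vector, and edge (relationship) detections carry one embedding per endpoint, with each endpoint linked to the vertex whose embedding is closest. The sketch below is a minimal, hypothetical illustration of that grouping step, not the authors' implementation; the function name, array shapes, and use of plain L2 nearest-neighbor matching are assumptions for clarity.

```python
# Hypothetical sketch: group vertex and edge detections into a graph by
# matching each edge endpoint's embedding to the nearest vertex embedding.
import numpy as np


def group_graph(vertex_embeds: np.ndarray, edge_endpoint_embeds: np.ndarray):
    """
    vertex_embeds:        (V, D) one embedding per detected vertex (object).
    edge_endpoint_embeds: (E, 2, D) two embeddings per detected edge
                          (relationship): one for the source slot and one
                          for the target slot.
    Returns a list of (source_vertex_idx, target_vertex_idx) pairs, one per edge.
    """
    edges = []
    for src_emb, tgt_emb in edge_endpoint_embeds:
        # Link each endpoint to the vertex with the closest embedding (L2 distance).
        src = int(np.argmin(np.linalg.norm(vertex_embeds - src_emb, axis=1)))
        tgt = int(np.argmin(np.linalg.norm(vertex_embeds - tgt_emb, axis=1)))
        edges.append((src, tgt))
    return edges


# Toy example: three vertices with well-separated 1-D embeddings and two edges.
vertices = np.array([[0.1], [1.0], [2.1]])
edge_endpoints = np.array([[[0.12], [1.05]],   # edge 0: links vertex 0 -> vertex 1
                           [[0.98], [2.00]]])  # edge 1: links vertex 1 -> vertex 2
print(group_graph(vertices, edge_endpoints))   # [(0, 1), (1, 2)]
```

During training, a pull/push style loss would encourage embeddings of elements belonging to the same graph node to agree and embeddings of distinct nodes to separate, which is what makes the nearest-embedding matching above meaningful at inference time.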
Original language: English (US)
Title of host publication: 31st Annual Conference on Neural Information Processing Systems (NIPS)
Publisher: Neural Information Processing Systems (NIPS)
State: Published - 2017
Externally published: Yes
