RAID: a relation-augmented image descriptor

Paul Guerrero, Niloy J. Mitra, Peter Wonka

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

As humans, we regularly interpret scenes based on how objects are related, rather than based on the objects themselves. For example, we see a person riding an object X or a plank bridging two objects. Current methods provide limited support for searching content based on such relations. We present RAID, a relation-augmented image descriptor that supports queries based on inter-region relations. The key idea of the descriptor is to encode region-to-region relations as the spatial distribution of point-to-region relationships between two image regions. RAID allows sketch-based retrieval and requires minimal training data, making it suitable even for querying uncommon relations. We evaluate the proposed descriptor by querying large image databases and successfully extract nontrivial images that demonstrate complex inter-region relations, which are easily missed or erroneously classified by existing methods. We assess the robustness of RAID on multiple datasets, even when the region segmentation is computed automatically or is very noisy.
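To make the key idea concrete, the sketch below illustrates one plausible reading of "encoding a region-to-region relation as the spatial distribution of point-to-region relationships": for each pixel of region A, compute a simple relationship to region B (direction and normalized distance to B's nearest pixel) and aggregate these into a histogram. This is a minimal illustration under assumed inputs (binary region masks), not the paper's exact formulation; the function name raid_sketch and the binning parameters are hypothetical.

```python
# Minimal sketch of the general idea behind a relation-augmented
# descriptor; NOT the paper's exact formulation. Each pixel of region A
# contributes one point-to-region relationship to region B (direction
# and normalized distance to B's nearest pixel), and the descriptor is
# the distribution of these relationships as a 2D histogram.
# All names and parameters here are hypothetical.
import numpy as np
from scipy.ndimage import distance_transform_edt

def raid_sketch(mask_a, mask_b, n_angle_bins=8, n_dist_bins=4):
    """Histogram of point-to-region relationships from region A to region B."""
    # Distance transform of the complement of B gives, for every pixel,
    # the distance to the nearest pixel of B plus that pixel's indices.
    dist, (iy, ix) = distance_transform_edt(~mask_b, return_indices=True)
    ys, xs = np.nonzero(mask_a)                 # pixels of region A
    dy, dx = iy[ys, xs] - ys, ix[ys, xs] - xs   # offset to nearest B pixel
    angles = np.arctan2(dy, dx)                 # direction of the relationship
    dists = dist[ys, xs]
    dists = dists / (dists.max() + 1e-9)        # normalize distances to [0, 1]
    hist, _, _ = np.histogram2d(
        angles, dists,
        bins=[n_angle_bins, n_dist_bins],
        range=[[-np.pi, np.pi], [0.0, 1.0]],
    )
    return (hist / (hist.sum() + 1e-9)).ravel() # descriptor as a distribution
```

Two such descriptors, one for a sketched query and one for a candidate region pair in the database, could then be compared with, e.g., an L2 or chi-squared distance to rank retrieval results.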
Original language: English (US)
Title of host publication: ACM Transactions on Graphics
Publisher: Association for Computing Machinery (ACM)
Pages: 1-12
Number of pages: 12
DOIs
State: Published - Jul 11 2016
