Allocating images and selecting image collections for distributed visual search

Bing Li, Ling Yu Duan, Jie Lin, Tiejun Huang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

To improve query throughput, distributed image retrieval has been widely used to address large-scale visual search. In textual retrieval, state-of-the-art approaches partition a textual database into multiple collections offline and allocate each collection to a server node. For each incoming query, only a few relevant collections are selected for search without seriously sacrificing retrieval accuracy, which enables server nodes to process multiple queries concurrently. Unlike text retrieval, distributed visual search poses challenges in optimally allocating images and selecting image collections, due to the lack of semantic meaning in the Bag of Words (BoW) based representation. In this paper, we propose a novel Semantics Related Distributed Visual Search (SRDVS) model. We employ Latent Dirichlet Allocation (LDA) [2] to discover latent concepts as an intermediate semantic representation over a large-scale image database. We aim to learn an optimal image allocation for each server node and to accurately perform collection selection for each query. Experimental results over a million-scale image database demonstrate encouraging performance over state-of-the-art approaches. On average, only 6% of the collections are selected per query, yet this yields retrieval performance comparable to exhaustive search over the whole database. Copyright © 2012 ACM.
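To make the pipeline described above concrete, the following is a minimal illustrative sketch of the two stages: offline, LDA is fit on BoW visual-word histograms and each image is allocated to a server node; online, a query's topic mixture is inferred and only the few best-matching collections are selected. This is an assumption-laden sketch using scikit-learn, not the paper's actual method: the dominant-topic allocation rule, the top-2 selection heuristic, and all variable names are hypothetical stand-ins for the learned allocation and selection the authors propose.

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
n_images, vocab_size, n_topics, n_nodes = 1000, 500, 20, 8

# Stand-in BoW visual-word histograms (real systems would use
# quantized local descriptors, e.g. SIFT, over a visual vocabulary).
bow = rng.integers(0, 5, size=(n_images, vocab_size))

# Offline: discover latent concepts over the image database.
lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
topic_dist = lda.fit_transform(bow)          # per-image topic mixtures

# Allocate each image to the node owning its dominant topic
# (a simple proxy for the paper's learned optimal allocation).
node_of_topic = np.arange(n_topics) % n_nodes
allocation = node_of_topic[topic_dist.argmax(axis=1)]

# Online: infer the query's topic mixture and search only the nodes
# holding its most probable topics (collection selection).
query = rng.integers(0, 5, size=(1, vocab_size))
q_topics = lda.transform(query)[0]
top_topics = q_topics.argsort()[::-1][:2]    # two most probable topics
selected_nodes = set(node_of_topic[top_topics])
print("search only nodes:", selected_nodes)

Under this toy selection rule, each query touches at most two of the eight nodes, which mirrors the paper's finding that a small fraction of collections can suffice when the partition is semantically coherent.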
Original language: English (US)
Title of host publication: ACM International Conference Proceeding Series
Pages: 127-131
Number of pages: 5
State: Published - Nov 19 2012
Externally published: Yes
