TY - GEN
T1 - Allocating images and selecting image collections for distributed visual search
AU - Li, Bing
AU - Duan, Ling Yu
AU - Lin, Jie
AU - Huang, Tiejun
N1 - Generated from Scopus record by KAUST IRTS on 2023-10-22
PY - 2012/11/19
Y1 - 2012/11/19
AB - To improve query throughput, distributed image retrieval has been widely used to address large-scale visual search. In textual retrieval, state-of-the-art approaches attempt to partition a textual database into multiple collections offline and allocate each collection to a server node. For each incoming query, only a few relevant collections are searched, without seriously sacrificing retrieval accuracy, which enables server nodes to process multiple queries concurrently. Unlike text retrieval, distributed visual search poses challenges in optimally allocating images and selecting image collections, due to the lack of semantic meaning in the Bag-of-Words (BoW) based representation. In this paper, we propose a novel Semantics Related Distributed Visual Search (SRDVS) model. We employ Latent Dirichlet Allocation (LDA) [2] to discover latent concepts as an intermediate semantic representation over a large-scale image database. We aim to learn an optimal image allocation for each server node and to accurately perform collection selection for each query. Experimental results over a million-scale image database demonstrate encouraging performance compared with state-of-the-art approaches. On average, only 6% of the collections are selected, yet this yields retrieval performance comparable to exhaustive search over the whole database. Copyright © 2012 ACM.
UR - https://dl.acm.org/doi/10.1145/2382336.2382372
UR - http://www.scopus.com/inward/record.url?scp=84869055483&partnerID=8YFLogxK
U2 - 10.1145/2382336.2382372
DO - 10.1145/2382336.2382372
M3 - Conference contribution
SN - 9781450316002
SP - 127
EP - 131
BT - ACM International Conference Proceeding Series
ER -