Manifold Learning for Rank Aggregation

Shangsong Liang, Ilya Markov, Zhaochun Ren, Maarten de Rijke

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

15 Scopus citations


We address the task of fusing ranked lists of documents that are retrieved in response to a query. Past work on this task of rank aggregation often assumes that documents in the lists being fused are independent and that only the documents ranked high in many lists are likely to be relevant to a given topic. We propose manifold learning aggregation approaches, ManX and v-ManX, that build on the cluster hypothesis and exploit inter-document similarity information. ManX regularizes document fusion scores so that documents that appear to be similar within a manifold receive similar scores, whereas v-ManX first generates virtual adversarial documents and then regularizes the fusion scores of both original and virtual adversarial documents. Since aggregation methods built on the cluster hypothesis are computationally expensive, we adopt an optimization method that uses the top-k documents as anchors and considerably reduces the computational complexity of manifold-based methods, resulting in two efficient aggregation approaches, a-ManX and a-v-ManX. We assess the proposed approaches experimentally and show that they significantly outperform state-of-the-art aggregation approaches, while a-ManX and a-v-ManX run faster than ManX and v-ManX, respectively.
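The core idea of regularizing fusion scores over a document manifold can be illustrated with a standard label-propagation-style smoother. The sketch below is not the paper's ManX algorithm; it is a minimal, hypothetical example assuming a symmetric document-similarity matrix and initial fusion scores (e.g. from CombSUM), showing how similar documents are pushed toward similar scores:

```python
import numpy as np

def manifold_smooth_scores(sim, base_scores, alpha=0.5, iters=50):
    """Smooth fusion scores over a document-similarity graph so that
    similar documents receive similar scores (cluster hypothesis).

    sim         -- symmetric nonnegative similarity matrix, shape (n, n)
    base_scores -- initial fusion scores, shape (n,)
    alpha       -- trade-off between graph smoothness and fidelity
                   to the initial scores (illustrative default)
    """
    # Symmetrically normalize the similarity matrix: S = D^{-1/2} W D^{-1/2}
    d = sim.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    S = sim * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Iterate F <- alpha * S F + (1 - alpha) * Y, a contraction that
    # converges to the closed-form solution (I - alpha S)^{-1} Y (scaled)
    F = base_scores.astype(float).copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * base_scores
    return F

# Two similar documents with very different initial scores, plus an
# isolated one: smoothing pulls the similar pair's scores closer together.
sim = np.array([[0.0, 1.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 0.0, 0.0]])
base = np.array([1.0, 0.0, 0.5])
smoothed = manifold_smooth_scores(sim, base)
```

After smoothing, the gap between the two similar documents' scores shrinks while their relative order is preserved; the isolated document is only rescaled by the fidelity term. The paper's a-ManX/a-v-ManX variants further cut the cost of this kind of graph computation by anchoring it on the top-k documents.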
Original language: English (US)
Title of host publication: Proceedings of the 2018 World Wide Web Conference on World Wide Web - WWW '18
Publisher: ACM Press
Number of pages: 10
ISBN (Print): 9781450356398
State: Published - 2018


