Beyond cross-domain learning: Multiple-domain nonnegative matrix factorization

Jim Jing-Yan Wang, Xin Gao

Research output: Contribution to journal › Article › peer-review

19 Scopus citations

Abstract

Traditional cross-domain learning methods transfer learning from a source domain to a target domain. In this paper, we propose the multiple-domain learning problem, in which several domains are treated equally. The multiple-domain learning problem assumes that samples from different domains have different distributions but share the same feature and class label spaces. Each domain can serve as a target domain while also acting as a source domain for the other domains. A novel multiple-domain representation method is proposed for this problem. The method is based on nonnegative matrix factorization (NMF) and learns a basis matrix and coding vectors for the samples so that the distribution mismatch among domains is reduced under an extended variant of the maximum mean discrepancy (MMD) criterion. The novel algorithm, multiple-domain NMF (MDNMF), was evaluated on two challenging multiple-domain learning problems: multiple-user spam email detection and multiple-domain glioma diagnosis. The effectiveness of the proposed algorithm is experimentally verified. © 2013 Elsevier Ltd. All rights reserved.
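To illustrate the idea summarized above, the sketch below combines standard multiplicative NMF updates with a simple penalty that pulls each domain's mean coding vector toward the global mean coding vector. This is only a minimal illustration under assumed choices: the function name `mdnmf_sketch`, the step size, and the mean-matching penalty are placeholders and do not reproduce the published MDNMF updates or its extended MMD criterion.

```python
import numpy as np

def mdnmf_sketch(X, domains, n_components=10, lam=1.0, n_iter=200, seed=0):
    """Illustrative multiple-domain NMF sketch (not the published MDNMF algorithm).

    X        : (d, n) nonnegative data matrix, columns are samples
    domains  : length-n array of domain labels
    lam      : weight of the domain-mean-matching penalty (assumed form)
    Returns basis matrix B (d, k) and coding matrix H (k, n).
    """
    rng = np.random.default_rng(seed)
    domains = np.asarray(domains)
    d, n = X.shape
    k = n_components
    B = rng.random((d, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    labels = np.unique(domains)

    for _ in range(n_iter):
        # Standard multiplicative update for the basis matrix (Frobenius-norm NMF).
        B *= (X @ H.T) / (B @ H @ H.T + 1e-9)

        # Projected gradient step on H: reconstruction term plus a simple penalty
        # that pulls each domain's mean coding vector toward the global mean,
        # a crude stand-in for the paper's extended MMD criterion.
        grad = B.T @ (B @ H - X)
        global_mean = H.mean(axis=1, keepdims=True)
        for dom in labels:
            idx = np.flatnonzero(domains == dom)
            dom_mean = H[:, idx].mean(axis=1, keepdims=True)
            grad[:, idx] += lam * (dom_mean - global_mean) / len(idx)
        H = np.maximum(H - 1e-3 * grad, 1e-9)

    return B, H
```

In this sketch, reducing the gap between per-domain mean coding vectors plays the role of reducing the domain distribution mismatch in the learned representation; the actual MDNMF formulation should be taken from the paper itself.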
Original language: English (US)
Pages (from-to): 181-189
Number of pages: 9
Journal: Engineering Applications of Artificial Intelligence
Volume: 28
DOIs
State: Published - Feb 2014

ASJC Scopus subject areas

  • Artificial Intelligence
  • Control and Systems Engineering
  • Electrical and Electronic Engineering

