Abstract
We consider the problem of multitask learning (MTL), in which we simultaneously learn classifiers for multiple data sets (tasks), sharing data across tasks where appropriate. We introduce a set of relevance parameters that control the degree to which data from other tasks are used in estimating the current task's classifier parameters. The relevance parameters are learned by maximizing their posterior probability, yielding an expectation-maximization (EM) algorithm. We illustrate the effectiveness of our approach through experimental results on a practical data set. © 2008 IEEE.
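The abstract describes the technique only at a high level, so the following is a minimal sketch of one plausible instantiation, not the letter's actual model: it assumes per-task logistic-regression classifiers and, for each task pair, a latent relevant-vs-background indicator whose posterior responsibility plays the role of the relevance parameter. All function names (`mtl_em`, `fit_weighted_logreg`), the prior `pi`, and every modeling choice below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow in exp for large |z|.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

def fit_weighted_logreg(X, y, sample_w, n_iter=200, lr=0.1, l2=1e-2):
    """Logistic regression by gradient ascent on a sample-weighted
    log-likelihood with a small L2 penalty (illustrative M-step)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ w)
        grad = X.T @ (sample_w * (y - p)) - l2 * w
        w += lr * grad / len(y)
    return w

def mtl_em(tasks, pi=0.5, n_em=10):
    """EM-style multitask learning with per-task-pair relevance weights.

    tasks: list of (X, y) pairs with a shared feature dimension, y in {0, 1}.
    Returns one weight vector per task and a matrix R, where R[t, s] is the
    degree to which task s's data is used when fitting task t's classifier.
    """
    T = len(tasks)
    R = np.ones((T, T))  # start fully shared; R[t, t] stays 1
    ws = [None] * T
    for _ in range(n_em):
        # M-step: refit each task's classifier on relevance-weighted pooled data.
        for t in range(T):
            Xs = np.vstack([tasks[s][0] for s in range(T)])
            ys = np.concatenate([tasks[s][1] for s in range(T)])
            sw = np.concatenate(
                [np.full(len(tasks[s][1]), R[t, s]) for s in range(T)])
            ws[t] = fit_weighted_logreg(Xs, ys, sw)
        # E-step: relevance of task s to task t is the posterior probability
        # that task s's labels were generated by task t's classifier rather
        # than by a coin-flip background model, under prior probability pi.
        for t in range(T):
            for s in range(T):
                if s == t:
                    continue
                X, y = tasks[s]
                p = sigmoid(X @ ws[t])
                ll_task = np.sum(y * np.log(p + 1e-12)
                                 + (1 - y) * np.log(1 - p + 1e-12))
                ll_bg = len(y) * np.log(0.5)
                R[t, s] = sigmoid(np.log(pi / (1 - pi)) + ll_task - ll_bg)
    return ws, R
```

Because each responsibility aggregates an entire task's log-likelihood, R[t, s] in this sketch saturates quickly toward 0 or 1, so each task effectively borrows data only from tasks its current classifier explains better than chance; the letter itself should be consulted for the real model and the priors placed on the relevance parameters.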
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 593-596 |
| Number of pages | 4 |
| Journal | IEEE Signal Processing Letters |
| Volume | 15 |
| DOIs | |
| State | Published - Dec 1 2008 |
| Externally published | Yes |