TY - JOUR
T1 - Robust Online Multi-Task Learning with Correlative and Personalized Structures
AU - Yang, Peng
AU - Zhao, Peilin
AU - Gao, Xin
PY - 2017/06/29
Y1 - 2017/06/29
AB - Multi-Task Learning (MTL) can enhance a classifier's generalization performance by learning multiple related tasks simultaneously. Conventional MTL operates in the offline setting and suffers from high training cost and poor scalability. To address these issues, online learning techniques have been applied to MTL problems. However, most existing online MTL algorithms constrain task relatedness to a presumed structure via a single weight matrix, a strict restriction that does not always hold in practice. In this paper, we propose a robust online MTL framework that overcomes this restriction by decomposing the weight matrix into two components: the first captures the low-rank common structure among tasks via a nuclear norm; the second identifies the personalized patterns of outlier tasks via a group lasso. Theoretical analysis shows that the proposed algorithm achieves sub-linear regret with respect to the best linear model in hindsight. However, the nuclear norm, which simply sums all nonzero singular values, may not be a good approximation of the rank. To improve the results, we use a log-determinant function as a non-convex rank approximation. Experimental results on a number of real-world applications also verify the efficacy of our approaches.
UR - http://hdl.handle.net/10754/625169
UR - http://ieeexplore.ieee.org/document/7959634/
UR - http://www.scopus.com/inward/record.url?scp=85023760534&partnerID=8YFLogxK
DO - 10.1109/TKDE.2017.2703106
M3 - Article
SN - 1041-4347
VL - 29
SP - 2510
EP - 2521
JO - IEEE Transactions on Knowledge and Data Engineering
JF - IEEE Transactions on Knowledge and Data Engineering
IS - 11
ER -