From n to n+1: Multiclass transfer incremental learning

Ilja Kuzborskij, Francesco Orabona, Barbara Caputo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

110 Scopus citations


Since the seminal work of Thrun [16], the learning to learn paradigm has been defined as the ability of an agent to improve its performance on each task with experience and with the number of tasks. Within the object categorization domain, the visual learning community has actively pursued this paradigm in the transfer learning setting. Almost all proposed methods focus on category detection problems, addressing how to learn a new target class from few samples by leveraging the known source classes. But if one thinks of learning over multiple tasks, there is a need for multiclass transfer learning algorithms able to exploit prior source knowledge when learning a new class while, at the same time, optimizing their overall performance. This is an open challenge for existing transfer learning algorithms. The contribution of this paper is a discriminative method that addresses this issue, based on a Least-Squares Support Vector Machine (LS-SVM) formulation. Our approach is designed to balance transferring to the new class against preserving what has already been learned on the source models. Extensive experiments on subsets of publicly available datasets demonstrate the effectiveness of our approach. © 2013 IEEE.
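To illustrate the kind of trade-off the abstract describes, the sketch below shows a Least-Squares SVM whose regularizer pulls the new hyperplane toward a scaled source hyperplane `beta * w_src`. This is a minimal, hypothetical illustration of transfer-regularized LS-SVM training in general, not the paper's actual multiclass incremental formulation; the function name, the toy data, and the source hyperplane are all assumptions.

```python
import numpy as np

def lssvm_transfer(X, y, w_src, beta=0.5, C=1.0):
    """Linear LS-SVM with a transfer term (illustrative sketch only).

    Minimizes 0.5*||w - beta*w_src||^2 + (C/2)*||y - X w||^2.
    Setting the gradient to zero gives the linear system
    (I + C X^T X) w = beta*w_src + C X^T y, solved in closed form.
    With beta = 0 this reduces to a plain (ridge-style) LS-SVM.
    """
    d = X.shape[1]
    A = np.eye(d) + C * X.T @ X
    b = beta * w_src + C * X.T @ y
    return np.linalg.solve(A, b)

# Toy data: two separable clusters with labels +1 / -1.
X = np.array([[2.0, 0.1], [1.5, -0.2], [-2.0, 0.0], [-1.8, 0.3]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w_src = np.array([1.0, 0.0])  # hypothetical hyperplane from a source model
w = lssvm_transfer(X, y, w_src, beta=0.5, C=10.0)
print(np.sign(X @ w))  # recovers the labels [1, 1, -1, -1]
```

The `beta` knob plays the conceptual role of the balance the paper aims for: `beta = 0` ignores the source model entirely, while larger values bias the new-class solution toward what was already learned.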
Original language: English (US)
Title of host publication: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Number of pages: 8
State: Published - Nov 15 2013
Externally published: Yes


