TY - GEN
T1 - Creativity Inspired Zero-Shot Learning
AU - Elhoseiny, Mohamed
AU - Elfeki, Mohamed
N1 - KAUST Repository Item: Exported on 2020-10-01
PY - 2019
Y1 - 2019
AB - Zero-shot learning (ZSL) aims at understanding unseen categories, with no training examples, from class-level descriptions. To improve the discriminative power of zero-shot learning, we model the visual learning process of unseen categories with inspiration from the psychology of human creativity for producing novel art. We relate ZSL to human creativity by observing that zero-shot learning is about recognizing the unseen, while creativity is about creating a likable unseen. We introduce a learning signal, inspired by the creativity literature, that explores the unseen space with hallucinated class descriptions and encourages careful deviation of their generated visual features from seen classes, while still allowing knowledge transfer from seen to unseen classes. Empirically, we show consistent improvement of several percent over the state of the art on the largest available benchmarks for the challenging task we focus on, generalized ZSL from noisy text, using the CUB and NABirds datasets. We also show the advantage of our loss on attribute-based ZSL on three additional datasets (AwA2, aPY, and SUN). Code is available at https://github.com/mhelhoseiny/CIZSL.
UR - http://hdl.handle.net/10754/661916
UR - http://openaccess.thecvf.com/content_ICCV_2019/html/Elhoseiny_Creativity_Inspired_Zero-Shot_Learning_ICCV_2019_paper.html
UR - http://www.scopus.com/inward/record.url?scp=85081907737&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2019.00588
DO - 10.1109/ICCV.2019.00588
M3 - Conference contribution
SN - 9781728148038
SP - 5783
EP - 5792
BT - 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
PB - IEEE
ER -