TY - GEN
T1 - Affinitention nets
T2 - 2021 ACM Conference on Health, Inference, and Learning, CHIL 2021
AU - Dov, David
AU - Assaad, Serge
AU - Si, Shijing
AU - Wang, Rui
AU - Xu, Hongteng
AU - Kovalsky, Shahar Ziv
AU - Bell, Jonathan
AU - Range, Danielle Elliott
AU - Cohen, Jonathan
AU - Henao, Ricardo
AU - Carin, Lawrence
N1 - Publisher Copyright:
© 2021 ACM.
PY - 2021/4/8
Y1 - 2021/4/8
N2 - Set classification is the task of predicting a single label from a set comprising multiple instances. The examples we consider are pathology slides represented by sets of patches and medical text data represented by sets of word embeddings. State-of-the-art methods, such as the transformer network, typically use attention mechanisms to learn representations of set data by modeling interactions among the instances of the set. These methods, however, have complex heuristic architectures comprising multiple heads and layers. The complexity of attention architectures hampers their training when only a small number of labeled sets is available, as is often the case in medical applications. To address this problem, we present a kernel-based representation learning framework that links learning affinity kernels to learning representations from attention architectures. We show that learning a combination of the sum and the product of kernels is equivalent to learning representations from multi-head, multi-layer attention architectures. From our framework, we devise a simplified attention architecture, which we term affinitention (affinity-attention) nets. We demonstrate the application of affinitention nets to the classification of the Set-Cifar10 dataset, thyroid malignancy prediction from pathology slides, and patient text-message triage. We show that affinitention nets provide competitive results compared to heuristic attention architectures and outperform other competing methods.
KW - attention
KW - medical text
KW - multiple instance learning
KW - set classification
KW - transformer
KW - whole slide images
UR - http://www.scopus.com/inward/record.url?scp=85104094953&partnerID=8YFLogxK
U2 - 10.1145/3450439.3451856
DO - 10.1145/3450439.3451856
M3 - Conference contribution
AN - SCOPUS:85104094953
T3 - ACM CHIL 2021 - Proceedings of the 2021 ACM Conference on Health, Inference, and Learning
SP - 14
EP - 24
BT - ACM CHIL 2021 - Proceedings of the 2021 ACM Conference on Health, Inference, and Learning
PB - Association for Computing Machinery, Inc
Y2 - 8 April 2021 through 9 April 2021
ER -