TY - CONF
T1 - Gabor Layers Enhance Network Robustness
AU - Pérez, Juan C.
AU - Alfarra, Motasem
AU - Jeanneret, Guillaume
AU - Bibi, Adel
AU - Thabet, Ali Kassem
AU - Ghanem, Bernard
AU - Arbeláez, Pablo
N1 - KAUST Repository Item: Exported on 2020-12-17
Acknowledged KAUST grant number(s): OSR-CRG2019-4033
Acknowledgements: This work was partially supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No. OSR-CRG2019-4033.
PY - 2020/11/5
Y1 - 2020/11/5
AB - We revisit the benefits of merging classical vision concepts with deep learning models. In particular, we explore the effect of replacing the first layers of various deep architectures with Gabor layers (i.e. convolutional layers with filters that are based on learnable Gabor parameters) on robustness against adversarial attacks. We observe that architectures with Gabor layers gain a consistent boost in robustness over regular models and maintain high generalizing test performance. We then exploit the analytical expression of Gabor filters to derive a compact expression for a Lipschitz constant of such filters, and harness this theoretical result to develop a regularizer we use during training to further enhance network robustness. We conduct extensive experiments with various architectures (LeNet, AlexNet, VGG16, and WideResNet) on several datasets (MNIST, SVHN, CIFAR10 and CIFAR100) and demonstrate large empirical robustness gains. Furthermore, we experimentally show how our regularizer provides consistent robustness improvements.
UR - http://hdl.handle.net/10754/666421
UR - http://link.springer.com/10.1007/978-3-030-58545-7_26
UR - http://www.scopus.com/inward/record.url?scp=85097093633&partnerID=8YFLogxK
DO - 10.1007/978-3-030-58545-7_26
M3 - Conference contribution
SN - 9783030585440
SP - 450
EP - 466
BT - Computer Vision – ECCV 2020
PB - Springer International Publishing
ER -