Gabor Layers Enhance Network Robustness

Juan C. Pérez, Motasem Alfarra, Guillaume Jeanneret, Adel Bibi, Ali Kassem Thabet, Bernard Ghanem, Pablo Arbeláez

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

14 Scopus citations

Abstract

We revisit the benefits of merging classical vision concepts with deep learning models. In particular, we explore the effect of replacing the first layers of various deep architectures with Gabor layers (i.e., convolutional layers whose filters are parameterized by learnable Gabor parameters) on robustness against adversarial attacks. We observe that architectures with Gabor layers gain a consistent boost in robustness over regular models while maintaining high test performance. We then exploit the analytical expression of Gabor filters to derive a compact expression for a Lipschitz constant of such filters, and harness this theoretical result to develop a regularizer we use during training to further enhance network robustness. We conduct extensive experiments with various architectures (LeNet, AlexNet, VGG16, and WideResNet) on several datasets (MNIST, SVHN, CIFAR10 and CIFAR100) and demonstrate large empirical robustness gains. Furthermore, we experimentally show how our regularizer provides consistent robustness improvements.
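To make the idea of a filter "parameterized by Gabor parameters" concrete, the sketch below builds a single Gabor kernel from the standard Gabor formulation (Gaussian envelope modulated by a cosine carrier). This is a minimal NumPy illustration, not the paper's implementation: the function name and parameter choices are assumptions, and in an actual Gabor layer the scalar parameters would be learnable and the kernel would be used as a convolutional filter.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, psi, gamma):
    """Build a real-valued Gabor filter of shape (size, size).

    Standard Gabor parameters: sigma (envelope width), theta
    (orientation), lam (wavelength), psi (phase offset), gamma
    (spatial aspect ratio). In a Gabor layer these scalars would be
    the learnable quantities; here they are fixed for illustration.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    # Rotate the coordinate grid by theta.
    x_r = x * np.cos(theta) + y * np.sin(theta)
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    # Gaussian envelope modulated by a cosine carrier.
    envelope = np.exp(-(x_r**2 + gamma**2 * y_r**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_r / lam + psi)
    return envelope * carrier

# A 7x7 filter at horizontal orientation.
k = gabor_kernel(size=7, sigma=2.0, theta=0.0, lam=4.0, psi=0.0, gamma=0.5)
```

Because the kernel is a closed-form function of a handful of scalars, a whole bank of such filters exposes far fewer free parameters than an unconstrained convolutional layer, which is what makes the analytical Lipschitz analysis mentioned in the abstract tractable.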
Original language: English (US)
Title of host publication: Computer Vision – ECCV 2020
Publisher: Springer International Publishing
Pages: 450-466
Number of pages: 17
ISBN (Print): 9783030585440
DOIs
State: Published - Nov 5 2020
