TY - GEN
T1 - Asymptotic Behavior of Adversarial Training in Binary Linear Classification
AU - Taheri, Hossein
AU - Pedarsani, Ramtin
AU - Thrampoulidis, Christos
N1 - KAUST Repository Item: Exported on 2022-09-14
Acknowledged KAUST grant number(s): GR8
Acknowledgements: The authors acknowledge support by NSF grants 1909320, 2003035, 193464, 2009030 and a GR8 award from KAUST.
This publication acknowledges KAUST support, but has no KAUST affiliated authors.
PY - 2022/8/3
Y1 - 2022/8/3
N2 - Adversarial training using empirical risk minimization is the state-of-the-art method for defense against adversarial attacks, that is, against small additive adversarial perturbations applied to test data that lead to misclassification. Despite its success in practice, understanding the generalization properties of adversarial training in classification remains largely open. In this paper, we take the first step in this direction by precisely characterizing the robustness of adversarial training in binary linear classification. Specifically, we consider the high-dimensional regime where the model dimension grows with the size of the training set at a constant ratio. Our results provide exact asymptotics for both standard and adversarial test errors under ℓ∞-norm bounded perturbations in a generative Gaussian-mixture model. We use our sharp error formulae to explain how the adversarial and standard errors depend upon the overparameterization ratio, the data model, and the attack budget. Finally, by comparing with the robust Bayes estimator, our sharp asymptotics allow us to study fundamental limits of adversarial training.
AB - Adversarial training using empirical risk minimization is the state-of-the-art method for defense against adversarial attacks, that is, against small additive adversarial perturbations applied to test data that lead to misclassification. Despite its success in practice, understanding the generalization properties of adversarial training in classification remains largely open. In this paper, we take the first step in this direction by precisely characterizing the robustness of adversarial training in binary linear classification. Specifically, we consider the high-dimensional regime where the model dimension grows with the size of the training set at a constant ratio. Our results provide exact asymptotics for both standard and adversarial test errors under ℓ∞-norm bounded perturbations in a generative Gaussian-mixture model. We use our sharp error formulae to explain how the adversarial and standard errors depend upon the overparameterization ratio, the data model, and the attack budget. Finally, by comparing with the robust Bayes estimator, our sharp asymptotics allow us to study fundamental limits of adversarial training.
UR - http://hdl.handle.net/10754/680945
UR - https://ieeexplore.ieee.org/document/9834717/
UR - http://www.scopus.com/inward/record.url?scp=85136310525&partnerID=8YFLogxK
U2 - 10.1109/ISIT50566.2022.9834717
DO - 10.1109/ISIT50566.2022.9834717
M3 - Conference contribution
SN - 9781665421591
SP - 127
EP - 132
BT - 2022 IEEE International Symposium on Information Theory (ISIT)
PB - IEEE
ER -