Adding Robustness to Support Vector Machines Against Adversarial Reverse Engineering

Ibrahim Alabdulmohsin, Xin Gao, Xiangliang Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Many classification algorithms have been successfully deployed in security-sensitive applications, including spam filters and intrusion detection systems. In such adversarial environments, adversaries can mount exploratory attacks against the defender, such as evasion and reverse engineering. In this paper, we discuss why reverse engineering attacks can be carried out quite efficiently against fixed classifiers, and investigate the use of randomization as a suitable strategy for mitigating their risk. In particular, we derive a semidefinite programming (SDP) formulation for learning a distribution of classifiers, subject to the constraint that any single classifier picked at random from this distribution provides reliable predictions with high probability. We analyze the tradeoff between the variance of the distribution and its predictive accuracy, and establish that one can almost always incorporate randomization with large variance without incurring a loss in accuracy. In other words, the conventional approach of using a fixed classifier in adversarial environments is generally Pareto suboptimal. Finally, we validate these conclusions on both synthetic and real-world classification problems. Copyright 2014 ACM.
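To make the idea concrete, the sketch below illustrates one way such a chance-constrained SDP can look in practice: the mean of a Gaussian distribution over linear classifiers is fixed at a deterministic SVM solution, and the trace of the covariance is maximized subject to each correctly classified training point keeping its prediction with probability at least 1 - eta. This is an illustrative reconstruction under stated assumptions, not the paper's exact formulation; the toy data, the eta = 0.05 confidence level, and the trace objective are choices made for the example.

```python
# Illustrative sketch, NOT the authors' exact SDP: learn a Gaussian
# distribution N(w_svm, Sigma) over linear classifiers, with the mean fixed
# at a deterministic SVM solution, by maximizing the variance subject to
# per-point chance constraints. eta, kappa, and the toy data are assumptions.
import cvxpy as cp
import numpy as np
from scipy.stats import norm
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy two-class problem: Gaussian blobs centered at +(2, 2) and -(2, 2).
n, d = 200, 2
y = np.where(rng.random(n) < 0.5, 1.0, -1.0)
X = rng.normal(size=(n, d)) + 2.0 * y[:, None]

# Fixed (deterministic) SVM; its weight vector becomes the mean of the
# distribution of classifiers.
w_svm = LinearSVC(fit_intercept=False, C=10.0).fit(X, y).coef_.ravel()
margins = y * (X @ w_svm)     # signed margins under the fixed mean
correct = margins > 0         # chance constraints apply to these points

# For w ~ N(w_svm, Sigma), the chance constraint
#     P(y_i * w @ x_i >= 0) >= 1 - eta
# is equivalent to  y_i * w_svm @ x_i >= kappa * sqrt(x_i @ Sigma @ x_i),
# where kappa = Phi^{-1}(1 - eta). Squaring both sides (the left side is
# positive on correctly classified points) gives a constraint that is
# linear in Sigma, so maximizing the variance is a semidefinite program.
eta = 0.05
kappa = norm.ppf(1.0 - eta)

Sigma = cp.Variable((d, d), PSD=True)
constraints = [
    X[i] @ Sigma @ X[i] <= (margins[i] / kappa) ** 2
    for i in range(n) if correct[i]
]
problem = cp.Problem(cp.Maximize(cp.trace(Sigma)), constraints)
problem.solve()

# Deploy by drawing a single classifier at random from the distribution;
# each draw preserves the prediction of every correctly classified training
# point with probability at least 1 - eta.
w_rand = rng.multivariate_normal(w_svm, Sigma.value)
agreement = np.mean(np.sign(X @ w_rand) == np.sign(X @ w_svm))
print(f"trace(Sigma) = {np.trace(Sigma.value):.3f}, agreement = {agreement:.2%}")
```

Because each deployment can be answered by a freshly drawn hyperplane, an adversary probing the classifier observes inconsistent decision boundaries, which is what frustrates reverse engineering; the chance constraints bound the per-point cost of this randomization in accuracy.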
Original language: English (US)
Title of host publication: Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management - CIKM '14
Publisher: Association for Computing Machinery (ACM)
Pages: 231-240
Number of pages: 10
ISBN (Print): 9781450325981
State: Published - 2014
