TY - GEN
T1 - AdvCat: Domain-Agnostic Robustness Assessment for Cybersecurity-Critical Applications with Categorical Inputs
AU - Orsini, Helene
AU - Bao, Hongyan
AU - Zhou, Yujun
AU - Xu, Xiangrui
AU - Han, Yufei
AU - Yi, Longyang
AU - Wang, Wei
AU - Gao, Xin
AU - Zhang, Xiangliang
N1 - KAUST Repository Item: Exported on 2023-01-31
Acknowledgements: The research reported in this paper was partially supported by funding from King Abdullah University of Science and Technology (KAUST).
PY - 2023/1/26
Y1 - 2023/1/26
N2 - Machine Learning-as-a-Service (MLaaS) systems have been widely developed for cybersecurity-critical applications, such as detecting network intrusions and fake news campaigns. Despite their effectiveness, their robustness against adversarial attacks is one of the key trust concerns for MLaaS deployment. We are thus motivated to assess the adversarial robustness of the Machine Learning models residing at the core of these security-critical applications with categorical inputs. Previous research efforts on assessing model robustness against manipulation of categorical inputs are specific to use cases and heavily depend on domain knowledge, or require white-box access to the target ML model. Such limitations prevent the robustness assessment from being provided as a domain-agnostic service to various real-world applications. We propose a provably optimal yet computationally highly efficient adversarial robustness assessment protocol for a wide range of ML-driven cybersecurity-critical applications. We demonstrate the use of the domain-agnostic robustness assessment method with a substantial experimental study on fake news detection and intrusion detection problems.
UR - http://hdl.handle.net/10754/687388
UR - https://ieeexplore.ieee.org/document/10021026/
U2 - 10.1109/bigdata55660.2022.10021026
DO - 10.1109/bigdata55660.2022.10021026
M3 - Conference contribution
BT - 2022 IEEE International Conference on Big Data (Big Data)
PB - IEEE
ER -