TY - CONF
T1 - Evasion attacks with adversarial deep learning against power system state estimation
AU - Sayghe, Ali
AU - Zhao, Junbo
AU - Konstantinou, Charalambos
N1 - Generated from Scopus record by KAUST IRTS on 2022-09-13
PY - 2020/8/2
Y1 - 2020/8/2
AB - Cyberattacks against critical infrastructures, including power systems, are increasing rapidly. False Data Injection Attacks (FDIAs) are among the attacks that have been demonstrated to be effective and have received increasing attention in recent years. FDIAs can manipulate measurements to perturb the results of power system state estimation without being detected, potentially leading to severe outages. To protect against FDIAs, several machine learning algorithms have been proposed in the literature. However, such methods are susceptible to adversarial examples, which can significantly reduce their detection accuracy. In this paper, we examine the effects of adversarial examples on FDIA detection using deep learning algorithms. Specifically, the impact of two different adversarial attacks on a Multilayer Perceptron (MLP) is investigated, namely the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) attack and the Jacobian-based Saliency Map Attack (JSMA). Numerical results on the IEEE 14-bus system, using load data collected from the New York Independent System Operator (NYISO), demonstrate the effectiveness of the proposed methods.
UR - https://ieeexplore.ieee.org/document/9281719/
UR - http://www.scopus.com/inward/record.url?scp=85095862704&partnerID=8YFLogxK
DO - 10.1109/PESGM41954.2020.9281719
M3 - Conference contribution
SN - 9781728155081
BT - IEEE Power and Energy Society General Meeting
PB - IEEE Computer Society
ER -