Evasion attacks with adversarial deep learning against power system state estimation

Ali Sayghe, Junbo Zhao, Charalambos Konstantinou

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

37 Scopus citations

Abstract

Cyberattacks against critical infrastructures, including power systems, are increasing rapidly. False Data Injection Attacks (FDIAs) are among the attacks that have been demonstrated to be effective and have received growing attention in recent years. FDIAs can manipulate measurements to perturb the results of power system state estimation without being detected, leading to potentially severe outages. To protect against FDIAs, several machine learning algorithms have been proposed in the literature. However, such methods are susceptible to adversarial examples, which can significantly reduce their detection accuracy. In this paper, we examine the effects of adversarial examples on FDIA detection using deep learning algorithms. Specifically, the impact of two different adversarial attacks on a Multilayer Perceptron (MLP) detector is investigated, namely the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) attack and the Jacobian-based Saliency Map Attack (JSMA). Numerical results on the IEEE 14-bus system, using load data collected from the New York Independent System Operator (NYISO), demonstrate the effectiveness of these attacks.
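To illustrate the evasion idea in the abstract, the following is a minimal sketch, not the paper's actual setup: a hypothetical linear "FDIA detector" (standing in for the MLP) and a greedy, JSMA-flavored perturbation that repeatedly nudges the single most salient measurement so an attacked measurement vector slips below the detection threshold. The weights, features, and step size are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical detector: flags a measurement vector x as attacked when
# sigmoid(w @ x + b) > 0.5. Weights are made up for illustration; the
# paper uses a trained MLP, not this linear model.
w = np.array([1.5, -0.8, 2.0, 0.3])
b = -1.0

def detect_score(x):
    """Detector's confidence that x contains injected false data."""
    return sigmoid(w @ x + b)

def jsma_style_perturb(x, theta=0.5, steps=3):
    """JSMA-flavored sketch: at each step, compute the gradient of the
    detection score w.r.t. the input, pick the single most salient
    feature, and nudge it in the direction that lowers the score."""
    x = x.copy()
    for _ in range(steps):
        p = detect_score(x)
        grad = p * (1 - p) * w            # d(score)/dx for the logistic model
        i = np.argmax(np.abs(grad))       # most salient measurement
        x[i] -= theta * np.sign(grad[i])  # perturb it to reduce the score
    return x

x_attacked = np.array([1.0, 0.2, 1.2, 0.0])  # initially flagged (score > 0.5)
x_adv = jsma_style_perturb(x_attacked)       # evades the detector
```

The key point mirrored from the paper: only a few targeted, small changes to the measurement vector are needed to drop the detector's score below its threshold, which is why saliency-based attacks like JSMA are effective against learned FDIA detectors.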
Original language: English (US)
Title of host publication: IEEE Power and Energy Society General Meeting
Publisher: IEEE Computer Society
ISBN (Print): 9781728155081
DOIs
State: Published - Aug 2 2020
Externally published: Yes

