Poisoning with Cerberus: Stealthy and Colluded Backdoor Attack against Federated Learning

Xiaoting Lyu, Yufei Han, Wei Wang, Jingkai Liu, Bin Wang, Jiqiang Liu, Xiangliang Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

39 Scopus citations

Abstract

Are Federated Learning (FL) systems free from backdoor poisoning once an arsenal of defense strategies is deployed? This is an intriguing question with significant practical implications for the utility of FL services. Despite the recent proliferation of poisoning-resilient FL methods, our study shows that carefully tuned collusion among malicious participants can minimize the trigger-induced bias of each poisoned local model with respect to its poison-free counterpart; this is the key to delivering stealthy backdoor attacks that circumvent a wide spectrum of state-of-the-art FL defenses. We instantiate this attack strategy as a distributed backdoor attack method named Cerberus Poisoning (CerP). CerP jointly tunes the backdoor trigger and constrains the poisoned model changes on each malicious participant, achieving a stealthy yet successful backdoor attack against a broad range of federated learning defense mechanisms. An extensive study on 3 large-scale benchmark datasets and 13 mainstream defense mechanisms confirms that Cerberus Poisoning poses a severe threat to the integrity and security of federated learning practice, despite the proliferation of robust FL methods.
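The abstract describes two ingredients: optimizing a backdoor trigger and constraining how far each malicious participant's poisoned model drifts from its poison-free counterpart. The sketch below is only a minimal illustration of that general idea in a PyTorch-style setup; the toy model, trigger shape, loss weights, and training loop are illustrative assumptions and do not reproduce the paper's actual CerP algorithm or its collusion mechanism.

```python
# Illustrative sketch (not the authors' code): a malicious FL client that
# jointly optimizes a pixel-pattern trigger and its local model while
# penalizing deviation from a poison-free reference model, so the poisoned
# update stays hard to distinguish from benign updates.
import copy
import torch
import torch.nn as nn
from torch.nn.utils import parameters_to_vector

torch.manual_seed(0)

# Hypothetical toy setup: 32x32 RGB inputs, 10 classes, backdoor target label 0.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
reference = copy.deepcopy(model)            # stands in for the poison-free local model
trigger = torch.zeros(3, 32, 32, requires_grad=True)  # learnable trigger pattern
mask = torch.zeros(3, 32, 32)
mask[:, -4:, -4:] = 1.0                     # trigger confined to a corner patch
target_label = 0
lam_prox = 0.1                              # weight of the deviation penalty (assumed)

opt = torch.optim.Adam(list(model.parameters()) + [trigger], lr=1e-3)
ce = nn.CrossEntropyLoss()

for step in range(100):
    x = torch.rand(16, 3, 32, 32)           # stand-in for a local training batch
    y = torch.randint(0, 10, (16,))
    x_poison = (1 - mask) * x + mask * torch.clamp(trigger, 0, 1)
    y_poison = torch.full_like(y, target_label)

    clean_loss = ce(model(x), y)             # preserve main-task accuracy
    backdoor_loss = ce(model(x_poison), y_poison)  # make the trigger flip the label
    # Proximity penalty: keep the poisoned parameters close to the poison-free
    # reference so the submitted update resembles a benign one.
    dev = (parameters_to_vector(model.parameters())
           - parameters_to_vector(reference.parameters()).detach())
    prox_loss = dev.norm() ** 2

    loss = clean_loss + backdoor_loss + lam_prox * prox_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```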
Original language: English (US)
Title of host publication: Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Publisher: AAAI Press
Pages: 9020-9028
Number of pages: 9
ISBN (Print): 9781577358800
State: Published - Jun 27 2023
Externally published: Yes

