TY - JOUR
T1 - Robust Federated Learning With Noisy Labeled Data Through Loss Function Correction
AU - Chen, Li
AU - Ang, Fan
AU - Chen, Yunfei
AU - Wang, Weidong
N1 - Acknowledged KAUST grant number(s): ORA-2021-CRG10-4696
Acknowledgements: This work was supported in part by the National Key Research and Development Program of China under Grant 2018YFA0701603, in part by the National Natural Science Foundation of China under Grant 62071445, and in part by King Abdullah University of Science and Technology Research Funding (KRF) under Award ORA-2021-CRG10-4696.
This publication acknowledges KAUST support, but has no KAUST affiliated authors.
PY - 2022/12/7
Y1 - 2022/12/7
N2 - Federated learning (FL) is a communication-efficient machine learning paradigm for leveraging distributed data at the network edge. Nevertheless, FL usually fails to train a high-quality model in networks where the edge nodes collect noisy labeled data. To tackle this challenge, this paper focuses on developing a robust FL framework. We consider two kinds of networks with different data distributions. First, we design a reweighted FL scheme for a full-data network, where every edge node holds both a large noisy labeled dataset and a small clean dataset. The key idea is that edge devices learn to assign local weights to the loss functions on the noisy labeled dataset and cooperate with the central server to update global weights. Second, we consider a part-data network, where some edge nodes lack a clean dataset and therefore cannot compute the weights locally. Broadcasting of the global weights is added to help those edge nodes without a clean dataset reweight their noisy loss functions. Both designs achieve a convergence rate of O(1/T^2). Simulation results illustrate that both proposed training processes improve prediction accuracy owing to proper weight assignment for the noisy loss functions.
AB - Federated learning (FL) is a communication-efficient machine learning paradigm for leveraging distributed data at the network edge. Nevertheless, FL usually fails to train a high-quality model in networks where the edge nodes collect noisy labeled data. To tackle this challenge, this paper focuses on developing a robust FL framework. We consider two kinds of networks with different data distributions. First, we design a reweighted FL scheme for a full-data network, where every edge node holds both a large noisy labeled dataset and a small clean dataset. The key idea is that edge devices learn to assign local weights to the loss functions on the noisy labeled dataset and cooperate with the central server to update global weights. Second, we consider a part-data network, where some edge nodes lack a clean dataset and therefore cannot compute the weights locally. Broadcasting of the global weights is added to help those edge nodes without a clean dataset reweight their noisy loss functions. Both designs achieve a convergence rate of O(1/T^2). Simulation results illustrate that both proposed training processes improve prediction accuracy owing to proper weight assignment for the noisy loss functions.
UR - http://hdl.handle.net/10754/691979
UR - https://ieeexplore.ieee.org/document/9973351/
UR - http://www.scopus.com/inward/record.url?scp=85144783924&partnerID=8YFLogxK
U2 - 10.1109/TNSE.2022.3227287
DO - 10.1109/TNSE.2022.3227287
M3 - Article
SN - 2327-4697
VL - 10
SP - 1501
EP - 1511
JO - IEEE Transactions on Network Science and Engineering
JF - IEEE Transactions on Network Science and Engineering
IS - 3
ER -