TY - JOUR
T1 - PromptFL: Let Federated Participants Cooperatively Learn Prompts Instead of Models - Federated Learning in Age of Foundation Model
AU - Guo, Tao
AU - Guo, Song
AU - Wang, Junxiao
AU - Tang, Xueyang
AU - Xu, Wenchao
N1 - KAUST Repository Item: Exported on 2023-09-06
PY - 2023/8/7
Y1 - 2023/8/7
N2 - Quick global aggregation of effective distributed parameters is crucial to federated learning (FL), which requires adequate bandwidth for parameter communication and sufficient user data for local training. Otherwise, FL may require excessive training time to converge and produce inaccurate models. In this paper, we propose a brand-new FL framework, PromptFL, that replaces federated model training with federated prompt training, i.e., federated participants train prompts instead of a shared model, to simultaneously achieve efficient global aggregation and local training on insufficient data by exploiting the power of foundation models (FM) in a distributed way. PromptFL ships an off-the-shelf FM, i.e., CLIP, to distributed clients, who cooperatively train shared soft prompts based on very little local data. Since PromptFL only needs to update the prompts instead of the whole model, both the local training and the global aggregation can be significantly accelerated. Moreover, an FM trained over large-scale data can provide strong adaptation capability to distributed users' tasks through the trained soft prompts. We empirically analyze PromptFL via extensive experiments and show its superiority in terms of system feasibility, user privacy, and performance.
AB - Quick global aggregation of effective distributed parameters is crucial to federated learning (FL), which requires adequate bandwidth for parameter communication and sufficient user data for local training. Otherwise, FL may require excessive training time to converge and produce inaccurate models. In this paper, we propose a brand-new FL framework, PromptFL, that replaces federated model training with federated prompt training, i.e., federated participants train prompts instead of a shared model, to simultaneously achieve efficient global aggregation and local training on insufficient data by exploiting the power of foundation models (FM) in a distributed way. PromptFL ships an off-the-shelf FM, i.e., CLIP, to distributed clients, who cooperatively train shared soft prompts based on very little local data. Since PromptFL only needs to update the prompts instead of the whole model, both the local training and the global aggregation can be significantly accelerated. Moreover, an FM trained over large-scale data can provide strong adaptation capability to distributed users' tasks through the trained soft prompts. We empirically analyze PromptFL via extensive experiments and show its superiority in terms of system feasibility, user privacy, and performance.
UR - http://hdl.handle.net/10754/694131
UR - https://ieeexplore.ieee.org/document/10210127/
UR - http://www.scopus.com/inward/record.url?scp=85167819741&partnerID=8YFLogxK
U2 - 10.1109/tmc.2023.3302410
DO - 10.1109/tmc.2023.3302410
M3 - Article
SN - 1536-1233
SP - 1
EP - 15
JO - IEEE Transactions on Mobile Computing
JF - IEEE Transactions on Mobile Computing
ER -