Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
Abstract: Federated learning is a collaborative machine learning framework in which a global model is trained by different organizations under privacy restrictions. Promising as it is, privacy and robustness issues emerge when an adversary attempts to infer private information from the exchanged parameters or to compromise the global model. Various protocols have been proposed to counter these security risks; however, it remains challenging to make federated learning protocols robust against Byzantine adversaries while preserving the privacy of individual participants. In this paper, we propose a differentially private Byzantine-robust federated learning scheme (DPBFL) with high computation and communication efficiency. The proposed scheme is effective in preventing adversarial attacks launched by Byzantine participants and achieves differential privacy through a novel aggregation protocol in the shuffle model. Theoretical analysis indicates that the proposed scheme converges to an approximately optimal solution, with a learning error that depends on the differential privacy budget and the number of Byzantine participants. Experimental results on MNIST, FashionMNIST and CIFAR10 demonstrate that the proposed scheme is effective and efficient.
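The general recipe the abstract describes (clients locally randomize their updates for differential privacy, a shuffler anonymizes them, and the server applies a Byzantine-robust aggregation rule) can be illustrated with a toy round. The sketch below is not the paper's DPBFL protocol: it assumes Gaussian-noise local randomization, a uniform random shuffler, and coordinate-wise median aggregation as a generic stand-in for the authors' shuffle-model aggregation. All names and parameters (local_randomize, clip_norm, sigma, and so on) are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def local_randomize(grad, clip_norm=1.0, sigma=0.5):
        # Client-side step: clip the update to bound its sensitivity,
        # then add Gaussian noise. (Illustrative local randomizer; the
        # paper's noise calibration and encoding differ in detail.)
        norm = np.linalg.norm(grad)
        clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
        return clipped + rng.normal(0.0, sigma, size=grad.shape)

    def shuffle(reports):
        # Shuffler: randomly permute client reports so the server
        # cannot link a report to its sender.
        idx = rng.permutation(len(reports))
        return [reports[i] for i in idx]

    def robust_aggregate(reports):
        # Server-side step: coordinate-wise median, a standard
        # Byzantine-robust rule (a stand-in, not the paper's protocol).
        return np.median(np.stack(reports), axis=0)

    # Toy round: 8 honest clients near a true gradient, 2 Byzantine clients.
    true_grad = np.array([0.5, -0.2, 0.1])
    honest = [true_grad + rng.normal(0, 0.05, 3) for _ in range(8)]
    byzantine = [np.array([10.0, 10.0, 10.0]) for _ in range(2)]
    reports = [local_randomize(g) for g in honest + byzantine]
    print(robust_aggregate(shuffle(reports)))

Because the median ignores extreme coordinates as long as honest clients form a majority, the two Byzantine reports barely move the aggregate, while the added noise provides the (local-randomizer side of the) differential privacy guarantee.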
Author(s): Ma X, Sun X, Wu Y, Liu Z, Chen X, Dong C
Publication type: Article
Publication status: Published
Journal: IEEE Transactions on Parallel and Distributed Systems
Year: 2022
Volume: 33
Issue: 12
Pages: 3690-3701
Print publication date: 01/12/2022
Online publication date: 14/04/2022
Acceptance date: 02/04/2018
ISSN (print): 1045-9219
ISSN (electronic): 1558-2183
Publisher: Institute of Electrical and Electronics Engineers
URL: https://doi.org/10.1109/TPDS.2022.3167434
DOI: 10.1109/TPDS.2022.3167434