Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
IEEE. The success of machine learning (ML) depends on the availability of large-scale datasets. However, recent studies have shown that models trained on such datasets are vulnerable to privacy attacks, among which the membership inference attack (MIA) poses a serious privacy risk. MIA allows an adversary to infer whether a sample belongs to the training dataset of the target model. Though a variety of defenses against MIA have been proposed, such as differential privacy and adversarial regularization, they also reduce model accuracy and thus make the models less usable. In this paper, aiming to maintain accuracy while protecting privacy against MIA, we propose a new defense against membership inference attacks based on a generative adversarial network (GAN). Specifically, the sensitive data is used to train a GAN, and the GAN then generates the data for training the actual model. To ensure that a model trained via a GAN on a small dataset still has high utility, two different GAN structures with special training techniques are employed to handle image data and tabular data, respectively. Experimental results show that the defense is effective on different datasets against existing attack schemes, and is more efficient than the most advanced MIA defenses.
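The overall pipeline the abstract describes (train a generative model on the sensitive data, then train the released model only on generated samples) can be sketched as follows. This is a simplified illustration, not the paper's method: a per-class Gaussian sampler stands in for the GAN generator, and a small logistic-regression model stands in for the target model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sensitive" dataset: two 2-D Gaussian classes.
def make_data(n):
    X0 = rng.normal(-1.0, 1.0, size=(n, 2))
    X1 = rng.normal(+1.0, 1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_priv, y_priv = make_data(200)

# Stand-in for the GAN: fit a per-class Gaussian to the private data,
# then sample a synthetic training set from it.  The actual defense
# trains a GAN at this step instead.
def fit_and_sample(X, y, n_per_class):
    Xs, ys = [], []
    for c in (0, 1):
        Xc = X[y == c]
        mu, sigma = Xc.mean(axis=0), Xc.std(axis=0)
        Xs.append(rng.normal(mu, sigma, size=(n_per_class, 2)))
        ys += [c] * n_per_class
    return np.vstack(Xs), np.array(ys)

X_syn, y_syn = fit_and_sample(X_priv, y_priv, 200)

# Train the released model (logistic regression via gradient descent)
# on synthetic samples only; the private records never enter training,
# so membership signals about them are not memorized directly.
def train_logreg(X, y, lr=0.1, epochs=300):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

w, b = train_logreg(X_syn, y_syn)

# Utility check on fresh real data from the same distribution.
X_test, y_test = make_data(200)
acc = ((1.0 / (1.0 + np.exp(-(X_test @ w + b))) > 0.5) == y_test).mean()
print(f"accuracy of model trained on synthetic data only: {acc:.2f}")
```

The point of the sketch is the data flow: only the generative model ever sees the sensitive records, and the released model is fitted to synthetic samples, which is what weakens the membership signal an MIA adversary exploits.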
Author(s): Hu L, Li J, Lin G, Peng S, Zhang Z, Zhang Y, Dong C
Publication type: Article
Publication status: Published
Journal: IEEE Transactions on Dependable and Secure Computing
Year: 2023
Volume: 20
Issue: 3
Pages: 2144-2157
Print publication date: 01/05/2023
Online publication date: 12/05/2022
Acceptance date: 06/05/2022
ISSN (print): 1545-5971
ISSN (electronic): 1941-0018
Publisher: IEEE
URL: https://doi.org/10.1109/TDSC.2022.3174569
DOI: 10.1109/TDSC.2022.3174569