TY - GEN
T1 - Heterogeneous Gaussian Mechanism
T2 - 28th International Joint Conference on Artificial Intelligence, IJCAI 2019
AU - Phan, Nhat Hai
AU - Vu, Minh N.
AU - Liu, Yang
AU - Jin, Ruoming
AU - Dou, Dejing
AU - Wu, Xintao
AU - Thai, My T.
N1 - Publisher Copyright:
© 2019 International Joint Conferences on Artificial Intelligence. All rights reserved.
PY - 2019
Y1 - 2019
N2 - In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve differential privacy in deep neural networks, with provable robustness against adversarial examples. We first relax the constraint of the privacy budget in the traditional Gaussian Mechanism from (0, 1] to (0, ∞), with a new bound of the noise scale to preserve differential privacy. The noise in our mechanism can be arbitrarily redistributed, offering a distinctive ability to address the trade-off between model utility and privacy loss. To derive provable robustness, our HGM is applied to inject Gaussian noise into the first hidden layer. Then, a tighter robustness bound is proposed. Theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of differentially private deep neural networks, compared with baseline approaches, under a variety of model attacks.
AB - In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve differential privacy in deep neural networks, with provable robustness against adversarial examples. We first relax the constraint of the privacy budget in the traditional Gaussian Mechanism from (0, 1] to (0, ∞), with a new bound of the noise scale to preserve differential privacy. The noise in our mechanism can be arbitrarily redistributed, offering a distinctive ability to address the trade-off between model utility and privacy loss. To derive provable robustness, our HGM is applied to inject Gaussian noise into the first hidden layer. Then, a tighter robustness bound is proposed. Theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of differentially private deep neural networks, compared with baseline approaches, under a variety of model attacks.
UR - http://www.scopus.com/inward/record.url?scp=85074919621&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85074919621&partnerID=8YFLogxK
U2 - 10.24963/ijcai.2019/660
DO - 10.24963/ijcai.2019/660
M3 - Conference contribution
AN - SCOPUS:85074919621
T3 - IJCAI International Joint Conference on Artificial Intelligence
SP - 4753
EP - 4759
BT - Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI 2019
A2 - Kraus, Sarit
PB - International Joint Conferences on Artificial Intelligence
Y2 - 10 August 2019 through 16 August 2019
ER -