TY - JOUR
T1 - Preserving differential privacy in convolutional deep belief networks
AU - Phan, Nhat Hai
AU - Wu, Xintao
AU - Dou, Dejing
N1 - Funding Information:
This work is supported by NIH Grant R01GM103309 to the SMASH project. Wu is also supported by NSF Grants 1502273 and 1523115. Dou is also supported by NSF Grant 1118050. We thank Xiao Xiao and Rebeca Sacks for their contributions.
Publisher Copyright:
© 2017, The Author(s).
PY - 2017/10/1
Y1 - 2017/10/1
AB - The remarkable development of deep learning in the medicine and healthcare domains raises obvious privacy issues when deep neural networks are built on users’ personal and highly sensitive data, e.g., clinical records, user profiles, and biomedical images. However, only a few scientific studies on preserving privacy in deep learning have been conducted. In this paper, we focus on developing a private convolutional deep belief network (pCDBN), which is essentially a convolutional deep belief network (CDBN) under differential privacy. Our main idea for enforcing ϵ-differential privacy is to leverage the functional mechanism to perturb the energy-based objective functions of traditional CDBNs, rather than their results. One key contribution of this work is that we propose the use of Chebyshev expansion to derive approximate polynomial representations of the objective functions. Our theoretical analysis shows that we can further derive the sensitivity and error bounds of the approximate polynomial representation. As a result, preserving differential privacy in CDBNs is feasible. We applied our model to a health social network, i.e., the YesiWell data, and to a handwritten digit dataset, i.e., the MNIST data, for human behavior prediction, human behavior classification, and handwritten digit recognition tasks. Theoretical analysis and rigorous experimental evaluations show that the pCDBN is highly effective and significantly outperforms existing solutions.
KW - Deep learning
KW - Differential privacy
KW - Health informatics
KW - Human behavior prediction
KW - Image classification
UR - http://www.scopus.com/inward/record.url?scp=85023768532&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85023768532&partnerID=8YFLogxK
DO - 10.1007/s10994-017-5656-2
M3 - Article
AN - SCOPUS:85023768532
SN - 0885-6125
VL - 106
SP - 1681
EP - 1704
JO - Machine Learning
JF - Machine Learning
IS - 9-10
ER -