TY - GEN

T1 - Information Leakage in Zero-Error Source Coding

T2 - 2021 IEEE International Symposium on Information Theory, ISIT 2021

AU - Liu, Yucheng

AU - Ong, Lawrence

AU - Johnson, Sarah

AU - Kliewer, Joerg

AU - Sadeghi, Parastoo

AU - Yeoh, Phee Lep

N1 - Publisher Copyright:
© 2021 IEEE.

PY - 2021/7/12

Y1 - 2021/7/12

N2 - We study the information leakage to a guessing adversary in zero-error source coding. The source coding problem is defined by a confusion graph capturing the distinguishability between source symbols. The information leakage is measured by the ratio of the adversary's successful guessing probability after eavesdropping on the codeword to that before, maximized over all possible source distributions. Such a measurement under the basic adversarial model, where the adversary makes a single guess and the guess is regarded as successful if and only if the estimated sequence equals the true source sequence, is known in the literature as the maximum min-entropy leakage or the maximal leakage. We develop a single-letter characterization of the optimal normalized leakage under the basic adversarial model, together with an optimum-achieving memoryless stochastic mapping scheme. An interesting observation is that the optimal normalized leakage is equal to the optimal compression rate with fixed-length source codes, and both can be simultaneously achieved by some deterministic coding schemes. We then extend the leakage measurement to generalized adversarial models in which the adversary makes multiple guesses and tolerates a certain level of distortion, for which we derive single-letter lower and upper bounds.

AB - We study the information leakage to a guessing adversary in zero-error source coding. The source coding problem is defined by a confusion graph capturing the distinguishability between source symbols. The information leakage is measured by the ratio of the adversary's successful guessing probability after eavesdropping on the codeword to that before, maximized over all possible source distributions. Such a measurement under the basic adversarial model, where the adversary makes a single guess and the guess is regarded as successful if and only if the estimated sequence equals the true source sequence, is known in the literature as the maximum min-entropy leakage or the maximal leakage. We develop a single-letter characterization of the optimal normalized leakage under the basic adversarial model, together with an optimum-achieving memoryless stochastic mapping scheme. An interesting observation is that the optimal normalized leakage is equal to the optimal compression rate with fixed-length source codes, and both can be simultaneously achieved by some deterministic coding schemes. We then extend the leakage measurement to generalized adversarial models in which the adversary makes multiple guesses and tolerates a certain level of distortion, for which we derive single-letter lower and upper bounds.

UR - http://www.scopus.com/inward/record.url?scp=85115086740&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85115086740&partnerID=8YFLogxK

U2 - 10.1109/ISIT45174.2021.9517778

DO - 10.1109/ISIT45174.2021.9517778

M3 - Conference contribution

AN - SCOPUS:85115086740

T3 - IEEE International Symposium on Information Theory - Proceedings

SP - 2590

EP - 2595

BT - 2021 IEEE International Symposium on Information Theory, ISIT 2021 - Proceedings

PB - Institute of Electrical and Electronics Engineers Inc.

Y2 - 12 July 2021 through 20 July 2021

ER -