# Calibrating the Attack to Sensitivity in Differentially Private Mechanisms


## Abstract


## 1. Introduction

#### 1.1. Related Work and Methodology

#### 1.2. Contributions and Outline

- We consider a new attacker model whereby the adversary exploits the underlying differentially private mechanism in order to remain undetected.
- We derive a trade-off between the privacy-protected adversary's advantage and the security of the system: the adversary aims to remain undetected while inflicting as much damage as possible, whereas the defender aims to preserve the privacy of the system and detect the attacker. This trade-off is defined in the framework of statistical hypothesis testing, similarly to [8].
- We adapt the Kullback–Leibler DP definition of [5] to the problem of adversarial classification in differentially private mechanisms and present numerical comparisons of the cases where the sensitivity of the system is smaller or greater than the bias induced by the adversary on the published information.
- We apply a source-coding approach to anomaly detection under differential privacy, bounding the variance of the adversary's additional data in terms of the sensitivity of the mechanism and the original data's statistics by deriving the mutual information between neighboring datasets.
- We introduce a new DP metric, called Chernoff DP, as a stronger alternative to the well-known $(\epsilon,\delta)$-DP and KL-DP for the Gaussian mechanism. Chernoff DP is also adapted for adversarial classification and numerically shown to outperform KL-DP.
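To make the attacker model above concrete, the following sketch simulates a Laplace mechanism in which the adversary injects a bias $\mu_1$ calibrated below the sensitivity $s$, while the defender applies a one-sided threshold test. All parameter values ($s$, $\epsilon$, $\mu_1$, the 5% false-alarm level) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon, size=1):
    """epsilon-DP Laplace mechanism: add Lap(0, s/epsilon) noise to the query."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale, size)

# Illustrative parameters (not taken from the paper)
s, eps = 1.0, 0.5          # sensitivity and privacy budget
true_query = 10.0
mu1 = 0.8 * s              # adversarial bias, calibrated below the sensitivity

honest = laplace_mechanism(true_query, s, eps, size=10_000)
attacked = laplace_mechanism(true_query + mu1, s, eps, size=10_000)

# Defender: flag outputs above a threshold kappa set for a 5% false-alarm rate,
# using the Laplace tail P(Lap(0, b) > t) = 0.5 * exp(-t / b).
kappa = true_query + (s / eps) * np.log(1 / (2 * 0.05))
alpha = np.mean(honest > kappa)      # false-alarm rate, close to 0.05
power = np.mean(attacked > kappa)    # detection probability
print(f"alpha ~ {alpha:.3f}, power ~ {power:.3f}")
```

With the bias kept below the sensitivity, the detection probability stays only slightly above the false-alarm rate: the attack hides inside the mechanism's own noise, which is exactly the trade-off studied in the following sections.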

## 2. System Model and Its Components

**Definition 1.**

**Definition 2.**

**Definition 3.**

**Definition 4.**

**Definition 5.**

**Definition 6.**

**Theorem 2.**

#### 2.1. Problem Definition

#### 2.1.1. First-Order Statistics of ${X}_{a}$

#### 2.1.2. Second-Order Statistics of ${X}_{a}$: Information-Theoretic Approach

## 3. Adversarial Classification in Laplace Mechanisms

- One-Sided Test

**Theorem 3.**

**Remark 1.**

**Proof.**

- Derivation of $\alpha $:

- How to determine $\kappa$?

- Derivation of the power of the test:

**Remark 2.**

#### 3.1. Two-Sided Test

**Theorem 4.**

**Proof.**

#### 3.2. A Trade-off between ${\mu}_{1}$, $s$ and $\epsilon$ for Detecting the Attacker: Two-Sided Test

**Corollary 1.**

**Proof.**

## 4. Adversarial Classification in Gaussian Mechanisms

#### 4.1. Privacy-Distortion Trade-off for Second-Order Statistics

**Theorem 5.**

**Proof.**

**Corollary 2.**

**Proof.**

**Remark 3.** This bound is reminiscent of the **rate-distortion function of the Gaussian source**, which originally gives the minimum achievable transmission rate for a given distortion, balancing (in the Gaussian case) the squared-error distortion against the source variance. This is in line with the adversary's goal in our setting, where the adversary aims to maximize the damage inflicted on the DP mechanism while, at the same time, calibrating the attack to the sensitivity, which here takes the place of the distortion, in order to avoid detection. Thus, as in classical rate-distortion theory, the mutual information between neighboring datasets is minimized for a given sensitivity, simultaneously satisfying the adversary's conflicting goals in adversarial classification under the Gaussian DP mechanism.
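The parallel with rate-distortion theory can be made concrete: for a Gaussian source of variance $\sigma^2$, the rate-distortion function is $R(D) = \frac{1}{2}\log(\sigma^2/D)$ for $D < \sigma^2$ and $0$ otherwise. The sketch below, with hypothetical numbers, shows how a larger distortion budget (played here by the sensitivity) lowers the minimum mutual information.

```python
import math

def gaussian_rate_distortion(sigma2, D):
    """R(D) = 0.5 * log(sigma^2 / D) nats for a Gaussian source, zero for D >= sigma^2."""
    return 0.5 * math.log(sigma2 / D) if D < sigma2 else 0.0

# Hypothetical numbers: source variance of the data, with the allowed
# distortion standing in for the sensitivity budget of the mechanism.
sigma2 = 4.0
for D in (0.5, 1.0, 2.0, 4.0):
    print(f"D = {D}: R(D) = {gaussian_rate_distortion(sigma2, D):.3f} nats")
```

As $D$ grows toward the source variance, $R(D)$ falls to zero: the more damage (distortion) the adversary is allowed within the sensitivity, the less information needs to flow between the neighboring datasets.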

#### 4.2. A Statistical Threshold: First-Order Statistics

**Theorem 6.**

**Proof.**

## 5. Kullback–Leibler DP and Chernoff DP for Adversarial Classification

**Definition 7.** A randomized mechanism ${P}_{Y|X}$ guarantees $\epsilon$-KL-DP if the following inequality holds for all neighboring datasets $x$ and $\tilde{x}$: $D_{KL}({P}_{Y|X=x} \,\|\, {P}_{Y|X=\tilde{x}}) \leq \epsilon$.
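As a sanity check on this notion for the Laplace mechanism, the KL divergence between two Laplace densities with common scale $b$ has the closed form $|\mu_0-\mu_1|/b + e^{-|\mu_0-\mu_1|/b} - 1$ (see, e.g., Gil et al. for divergences of common densities). Evaluating it at the worst-case neighboring shift $s$ with scale $b = s/\epsilon$ confirms the KL budget is met; the parameter values below are illustrative.

```python
import math

def kl_laplace_same_scale(mu0, mu1, b):
    """Closed-form KL divergence between Lap(mu0, b) and Lap(mu1, b)."""
    d = abs(mu0 - mu1) / b
    return d + math.exp(-d) - 1.0

# Neighboring datasets shift the query value by at most the sensitivity s,
# and the epsilon-DP Laplace mechanism uses scale b = s / eps.
s, eps = 1.0, 0.5
b = s / eps
worst_case_kl = kl_laplace_same_scale(0.0, s, b)   # shift of exactly s
print(f"worst-case KL over neighbors: {worst_case_kl:.4f} nats (budget eps = {eps})")
```

Since $d + e^{-d} - 1 \leq d$ for all $d \geq 0$, the worst-case KL divergence never exceeds $s/b = \epsilon$, so the $\epsilon$-DP Laplace mechanism also satisfies $\epsilon$-KL-DP.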

#### 5.1. Laplace Mechanisms

**Remark 4.**

#### 5.2. Chernoff DP for Gaussian Mechanism

**Definition 8** (Chernoff DP)**.**

Chernoff DP is a **stronger privacy metric** than KL-DP, and hence than $(\epsilon,\delta)$-DP, for the Gaussian mechanism due to the prior probabilities. Such a comparison is presented numerically in Section 6. For the special case of equal standard deviations of both distributions, the Chernoff information $C({f}_{0}||{f}_{1})$ is exactly $a \cdot b \cdot {D}_{KL}({f}_{0}||{f}_{1})$.
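The equal-variance Gaussian case can be checked numerically: the Chernoff information between $\mathcal{N}(\mu_0,\sigma^2)$ and $\mathcal{N}(\mu_1,\sigma^2)$ evaluates to $(\mu_0-\mu_1)^2/8\sigma^2$, i.e., one quarter of the corresponding KL divergence. If the constants $a$ and $b$ above both equal $1/2$ (an assumption on our reading of the text), this matches the stated $a \cdot b \cdot D_{KL}$ relation. The numbers below are illustrative.

```python
import math

def kl_gauss(mu0, mu1, sigma):
    """KL divergence between N(mu0, sigma^2) and N(mu1, sigma^2)."""
    return (mu0 - mu1) ** 2 / (2 * sigma ** 2)

def chernoff_gauss(mu0, mu1, sigma):
    """Chernoff information between equal-variance Gaussians:
    max over lam of lam*(1-lam)*(mu0-mu1)^2 / (2*sigma^2), attained at lam = 1/2."""
    return (mu0 - mu1) ** 2 / (8 * sigma ** 2)

# Hypothetical mechanism outputs under H0 (no attack) and H1 (biased by the adversary)
mu0, mu1, sigma = 0.0, 1.0, 1.0
kl = kl_gauss(mu0, mu1, sigma)
ci = chernoff_gauss(mu0, mu1, sigma)
print(f"KL = {kl:.3f} nats, Chernoff = {ci:.3f} nats, ratio = {ci / kl:.2f}")
```

Because the Chernoff exponent governs the best achievable error decay of the Bayesian test regardless of priors, a bound in terms of Chernoff information constrains the detector more tightly than the same bound in KL divergence.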

## 6. Numerical Evaluation Results

#### 6.1. ROC Curves for Laplace Mechanism

#### 6.2. KL-DP for Adversarial Classification

#### 6.3. Numerical Evaluation Results for the Gaussian Mechanism

#### 6.3.1. Privacy-Distortion Trade-Off

#### 6.3.2. KL-DP vs. Chernoff DP

## 7. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References

1. Dwork, C. Differential Privacy. In Proceedings of Automata, Languages and Programming, Venice, Italy, 10–14 July 2006; pp. 1–12.
2. Dwork, C. Differential Privacy: A Survey of Results. In Proceedings of the International Conference on Theory and Applications of Models of Computation (TAMC 2008), LNCS 4978, Xi'an, China, 25–29 April 2008; pp. 1–19.
3. Dwork, C.; Smith, A. Differential Privacy for Statistics: What We Know and What We Want to Learn. J. Priv. Confid. **2010**, 1, 135–154.
4. Dwork, C.; Roth, A. The Algorithmic Foundations of Differential Privacy. Found. Trends Theor. Comput. Sci. **2014**, 9, 211–407.
5. Cuff, P.; Yu, L. Differential Privacy as a Mutual Information Constraint. In Proceedings of CCS 2016, the 23rd ACM Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016.
6. Giraldo, J.; Cardenas, A.A.; Kantarcioglu, M.; Katz, J. Adversarial Classification Under Differential Privacy. In Proceedings of NDSS 2020, Network and Distributed Systems Security Symposium, San Diego, CA, USA, 23–26 February 2020.
7. Dwork, C.; McSherry, F.; Nissim, K.; Smith, A. Calibrating Noise to Sensitivity in Private Data Analysis. In Proceedings of the Theory of Cryptography Conference, New York, NY, USA, 4–7 March 2006; pp. 265–284.
8. Liu, C.; He, X.; Chanyaswad, T.; Wang, S.; Mittal, P. Investigating Statistical Privacy Frameworks from the Perspective of Hypothesis Testing. In Proceedings of PETS 2019, Privacy Enhancing Technologies, Stockholm, Sweden, 16–20 July 2019; pp. 233–254.
9. Sheffet, O. Locally Private Hypothesis Testing. In Proceedings of Machine Learning Research, Beijing, China, 14–16 November 2018.
10. Wang, W.; Ying, L.; Zhang, J. On the Relation Between Identifiability, Differential Privacy and Mutual Information Privacy. IEEE Trans. Inf. Theory **2016**, 62, 5018–5029.
11. Sarwate, A.; Sankar, L. A Rate-Distortion Perspective on Local Differential Privacy. In Proceedings of the Fiftieth Annual Allerton Conference, Monticello, IL, USA, 1–3 October 2014; pp. 903–908.
12. du Pin Calmon, F.; Fawaz, N. Privacy against Statistical Inference. In Proceedings of the Fiftieth Annual Allerton Conference, Monticello, IL, USA, 1–5 October 2012; pp. 1401–1408.
13. Pastore, A.; Gastpar, M. Locally Differentially Private Randomized Response for Discrete Distribution Learning. J. Mach. Learn. Res. **2021**, 22, 1–56.
14. Naveiro, R.; Redondo, A.; Rios Insua, D.; Ruggeri, F. Adversarial Classification: An Adversarial Risk Analysis. Int. J. Approx. Reason. **2019**, 113, 133–148.
15. Insua, I.R.; Rios, J.; Banks, D. Adversarial Risk Analysis. J. Am. Stat. Assoc. **2009**, 104, 841–854.
16. Lowd, D.; Meek, C. Adversarial Learning. In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining (KDD '05), Chicago, IL, USA, 21–24 August 2005; pp. 641–647.
17. Dalvi, N.; Domingos, P.; Mausam; Sanghai, S.; Verma, D. Adversarial Classification. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '04), Seattle, WA, USA, 22–25 August 2004; pp. 99–108.
18. Neyman, J.; Pearson, E. On the Problem of the Most Efficient Tests of Statistical Hypotheses. Philos. Trans. R. Soc. A **1933**, 231, 289–337.
19. Cover, T.; Thomas, J.A. Elements of Information Theory; Wiley Series in Telecommunications; Wiley: Hoboken, NJ, USA, 1991.
20. Gil, M.; Alajaji, F.; Linder, T. Rényi Divergence Measures for Commonly Used Univariate Continuous Distributions. Inf. Sci. **2013**, 249, 124–131.

**Figure 8.** The upper bound (54) on the additional data's variance vs. the chi-square table values for various values of $\theta$.


© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Ünsal, A.; Önen, M.
Calibrating the Attack to Sensitivity in Differentially Private Mechanisms. *J. Cybersecur. Priv.* **2022**, *2*, 830-852.
https://doi.org/10.3390/jcp2040042
