Privacy Loss Classes: The Central Limit Theorem in Differential Privacy

David M. Sommer, Sebastian Meiser, Esfandiar Mohammadi

Abstract

Quantifying the privacy loss of a privacy-preserving mechanism on potentially sensitive data is a complex and well-researched topic; the de facto standard privacy measures are ε-differential privacy (DP) and its versatile relaxation, (ε, δ)-approximate differential privacy (ADP). Recently, novel variants of (A)DP have focused on giving tighter privacy bounds under continual observation. In this paper we unify many of these previous works via the privacy loss distribution (PLD) of a mechanism. We show that for non-adaptive mechanisms, the privacy loss under sequential composition undergoes a convolution and converges to a Gauss distribution (the central limit theorem for DP). From this we derive several relevant insights: we can characterize mechanisms by their privacy loss class, i.e., by the Gauss distribution to which their PLD converges, which allows us to give novel ADP bounds for mechanisms based on their privacy loss class; and we derive exact analytical guarantees for the approximate randomized response mechanism, as well as an exact analytical closed-form expression for the Gauss mechanism that, given ε, calculates δ such that the mechanism is (ε, δ)-ADP (a tight result, not an over-approximating bound).
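As an illustration of the convolution and CLT behavior the abstract describes (a minimal sketch, not code from the paper; the parameters `eps0`, `n`, and `eps` are arbitrary choices for the example), the following NumPy snippet builds the two-point PLD of a randomized-response mechanism without distinguishing events, convolves it with itself under n-fold non-adaptive composition, and checks that its mean and variance match the Gaussian moments predicted by the central limit theorem. It also evaluates the standard tight-ADP sum δ(ε) = Σ_{y>ε} (1 − e^{ε−y}) · PLD(y) over the convolved distribution.

```python
import numpy as np

# PLD of randomized response with parameter eps0: the privacy loss is
# +eps0 with probability p and -eps0 with probability 1-p,
# where p = e^eps0 / (1 + e^eps0).
eps0 = 0.3
p = np.exp(eps0) / (1.0 + np.exp(eps0))

# Under n-fold non-adaptive sequential composition, the PLD is the
# n-fold convolution of the single-shot PLD.
n = 200
single = np.array([1.0 - p, p])  # masses on the losses {-eps0, +eps0}
pld = single.copy()
for _ in range(n - 1):
    pld = np.convolve(pld, single)

# Support of the convolved PLD: -n*eps0, ..., +n*eps0 in steps of 2*eps0.
losses = eps0 * np.arange(-n, n + 1, 2)

# Gaussian moments predicted by the CLT ("privacy loss class"):
mu = n * eps0 * (2.0 * p - 1.0)
sigma2 = n * (eps0 ** 2) * 4.0 * p * (1.0 - p)

# Empirical moments of the convolved PLD match them exactly,
# since the composed loss is a sum of i.i.d. single-shot losses.
emp_mu = np.sum(losses * pld)
emp_var = np.sum((losses - emp_mu) ** 2 * pld)

# Tight ADP from the PLD: delta(eps) = sum over losses y of
# max(0, 1 - e^(eps - y)) * PLD(y).
eps = 1.0
delta = np.sum(np.maximum(0.0, 1.0 - np.exp(eps - losses)) * pld)
```

Because randomized response has a two-point PLD, the convolved distribution is binomial, which makes the convergence to its Gauss privacy loss class easy to observe numerically.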
Original language: English
Journal: Proceedings on Privacy Enhancing Technologies
Volume: 2019
Issue number: 2
Pages (from-to): 245-269
Number of pages: 25
ISSN: 2299-0984
Publication status: Published - 2019
