Revealing Unintentional Information Leakage in Low-Dimensional Facial Portrait Representations

Kathleen Anderson*, Thomas Martinetz

*Corresponding author for this work

Abstract

We evaluate the information that can unintentionally leak into the low-dimensional output of a neural network by reconstructing an input image from a 40- or 32-element feature vector that is intended to describe only abstract attributes of a facial portrait. The reconstruction uses black-box access to the image encoder that generates the feature vector. Unlike previous work, we leverage recent advances in image generation and facial similarity, implementing a method that outperforms the current state of the art. Our strategy uses a pretrained StyleGAN and a new loss function that compares the perceptual similarity of portraits by mapping them into the latent space of a FaceNet embedding. Additionally, we present a new technique that fuses the outputs of an ensemble to deliberately generate specific aspects of the recreated image.
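The abstract's identity-aware loss can be illustrated with a minimal sketch: instead of a pixel-wise error, the reconstruction is compared to the target in an embedding space. The `facenet_embed` stub below is an assumption standing in for a real pretrained FaceNet forward pass (here replaced by a fixed random projection so the example is self-contained); the loss itself is the squared distance between the two embeddings.

```python
import numpy as np

def facenet_embed(image: np.ndarray) -> np.ndarray:
    # Assumption: stand-in for a pretrained FaceNet producing a
    # 128-D identity embedding. A fixed-seed random projection is
    # used here only to keep the sketch self-contained.
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((128, image.size))
    v = proj @ image.ravel()
    return v / np.linalg.norm(v)

def identity_loss(reconstruction: np.ndarray, target: np.ndarray) -> float:
    # Perceptual similarity measured between embeddings rather than pixels:
    # two portraits of the same identity score close even if pixels differ.
    e_r = facenet_embed(reconstruction)
    e_t = facenet_embed(target)
    return float(np.sum((e_r - e_t) ** 2))
```

In the paper's setting this loss would drive the search over StyleGAN latent codes, with the black-box encoder's feature vector providing the attribute constraint.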

Original language: German
Title of host publication: Lecture Notes in Computer Science (LNCS)
Number of pages: 177
Volume: 15016
Publisher: Springer, Cham
Publication date: 17.09.2024
Pages: 163
ISBN (Print): 978-3-031-72331-5
ISBN (Electronic): 978-3-031-72332-2
Publication status: Published - 17.09.2024