Abstract
We evaluate the information that can unintentionally leak into the low-dimensional output of a neural network by reconstructing an input image from a 40- or 32-element feature vector that is intended to describe only abstract attributes of a facial portrait. The reconstruction uses black-box access to the image encoder that generates the feature vector. Unlike previous work, we leverage recent advances in image generation and facial similarity, implementing a method that outperforms the current state of the art. Our strategy uses a pretrained StyleGAN and a new loss function that compares the perceptual similarity of portraits by mapping them into the latent space of a FaceNet embedding. Additionally, we present a new technique that fuses the outputs of an ensemble to deliberately generate specific aspects of the recreated image.
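The core of the described loss can be sketched as a cosine distance between identity embeddings of the reconstructed and target portraits. This is a minimal illustration of the idea, not the paper's implementation; the function name and the stand-in embedding vectors are assumptions, and in practice the embeddings would come from a FaceNet model applied to the StyleGAN output and the target image.

```python
import numpy as np

def identity_loss(emb_recon, emb_target):
    """Perceptual identity loss: cosine distance between FaceNet-style
    embeddings of the reconstructed and target portraits.
    (Illustrative sketch; names are not from the paper.)"""
    a = emb_recon / np.linalg.norm(emb_recon)
    b = emb_target / np.linalg.norm(emb_target)
    return 1.0 - float(np.dot(a, b))

# Identical embeddings give zero loss; orthogonal ones give 1.0.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
print(identity_loss(e1, e1))  # 0.0
print(identity_loss(e1, e2))  # 1.0
```

Minimizing such a loss over the StyleGAN latent code steers the generated portrait toward the identity encoded in the target's embedding, rather than toward raw pixel similarity.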
| Original language | German |
|---|---|
| Title of host publication | Lecture Notes in Computer Science (LNCS) |
| Number of pages | 177 |
| Volume | 15016 |
| Publisher | Springer, Cham |
| Publication date | 17.09.2024 |
| Pages | 163 |
| ISBN (Print) | 978-3-031-72331-5 |
| ISBN (Electronic) | 978-3-031-72332-2 |
| Publication status | Published - 17.09.2024 |