Learning Privacy from Visual Entities
Subjective interpretation and content diversity make predicting whether an image is private or public a challenging task. Graph neural networks combined with convolutional neural networks (CNNs), which comprise from 14,000 to 500 million parameters, generate features for visual entities (e.g., scene and object types) and identify the entities that contribute to the decision. In this paper, we show that a simpler combination of transfer learning and a CNN that relates privacy to scene types optimises only 732 parameters while achieving performance comparable to that of graph-based methods. In contrast, end-to-end training of graph-based methods can mask the contribution of individual components to the classification performance. Furthermore, we show that a high-dimensional feature vector, extracted with CNNs for each visual entity, is unnecessary and needlessly complicates the model. The graph component also has a negligible impact on performance, which is instead driven by fine-tuning the CNN to optimise image features for the privacy nodes.
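The "732 parameters" figure in the abstract is consistent with a two-class linear head over 365 scene-type scores (365 × 2 weights + 2 biases = 732), as produced by a scene classifier such as one trained on Places365. The sketch below illustrates this parameter count under that assumption; the architecture and names here are illustrative, not the authors' exact code.

```python
import numpy as np

# Assumed setup: 365 scene categories (e.g., Places365) feeding a
# private-vs-public linear classifier. Only this head is trained.
NUM_SCENES = 365
NUM_CLASSES = 2  # private, public

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(NUM_SCENES, NUM_CLASSES))  # 730 weights
b = np.zeros(NUM_CLASSES)                                   # 2 biases


def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


def predict(scene_scores):
    """Map a 365-dim scene-score vector to class probabilities."""
    return softmax(scene_scores @ W + b)


n_params = W.size + b.size
print(n_params)  # 365 * 2 + 2 = 732
```

Freezing the scene-feature extractor and training only this head is one standard transfer-learning recipe that matches the parameter budget stated in the abstract.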
Queen Mary University of London
École Polytechnique Fédérale de Lausanne
Year: 2025
Volume: 2025
Issue: 3
Pages: 261-281
Status: REVIEWED
Institution: EPFL
| Event name | Event acronym | Event place | Event date |
| Privacy Enhancing Technologies Symposium | PETS 2025 | Washington DC, USA | 2025-07-14 - 2025-07-19 |