20.06.2025, ZKM Lecture Hall (Vortragssaal)
Language: English
The publication of human-related data is commonly accompanied by a considerable risk of violating privacy rights. This is especially true if repeated requests for the collected data are possible, even when an intermediary, intelligent agent is used to protect people's privacy. A common application case is census data, where the raw records themselves are not published. Even so, we can learn about individuals' private attributes by asking the right questions, namely ones that single out only a few individuals. These kinds of attacks make it challenging to determine whether the released data is privacy-sensitive or not.
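To illustrate the idea, here is a minimal sketch of such a differencing attack. All names, attributes, and the query interface are invented for illustration: the curator answers only aggregate counting queries and never releases raw records, yet two "harmless" queries combined reveal one person's sensitive attribute.

```python
# Hypothetical dataset held by a curator; never published directly.
records = [
    {"name": "Alice", "age": 34, "hiv_positive": False},
    {"name": "Bob",   "age": 52, "hiv_positive": True},
    {"name": "Carol", "age": 29, "hiv_positive": False},
]

def count(predicate):
    """Answer an aggregate counting query over the hidden records."""
    return sum(1 for r in records if predicate(r))

# Two seemingly harmless aggregate queries...
q1 = count(lambda r: r["hiv_positive"])                     # all positives
q2 = count(lambda r: r["hiv_positive"] and r["age"] != 52)  # all except Bob
# ...whose difference singles out a single individual:
bob_is_positive = (q1 - q2) == 1
print(bob_is_positive)  # True: Bob's status leaks from aggregates alone
```

The second query differs from the first only by a condition that excludes exactly one person, so subtracting the answers isolates that person's record.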
In this talk, we present a common solution called differential privacy. Thanks to its mathematical framework, differential privacy allows us to attach strong, provable privacy guarantees to anonymization techniques.
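As a taste of how such guarantees are achieved in practice, the following sketch shows the classic Laplace mechanism, one standard way (not necessarily the one covered in the talk) to make a counting query differentially private. The function name and parameters are our own; the mechanism adds noise drawn from a Laplace distribution with scale sensitivity/epsilon.

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value plus Laplace(0, sensitivity/epsilon) noise.

    For a query whose answer changes by at most `sensitivity` when one
    person's record is added or removed, this satisfies epsilon-DP.
    """
    scale = sensitivity / epsilon
    # The difference of two i.i.d. Exponential(1) draws, scaled by
    # `scale`, is distributed as Laplace(0, scale).
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_value + noise

# A counting query has sensitivity 1: one person changes the count by at most 1.
true_count = 42
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means more noise and a stronger guarantee; the released `noisy_count` is unbiased but deliberately imprecise.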
We provide a beginner-friendly introduction showing applications and limitations of differential privacy. We then discuss the current approach to differential privacy, and to privacy in general, by examining real-world examples. Finally, we also discuss the "privacy washing" that some companies engage in: not every application that promises differential privacy actually achieves it, nor does every application that achieves differential privacy protect its users sufficiently.
PhD Student at KIT, builds anonymizations, likes bees.
PhD student at KIT