Privacy Enhancing Technologies for Regulatory Compliance
Overview
Privacy protection is central to trust in AI systems. While regulations such as GDPR dictate how data may flow through these systems, rapid advances in AI often outpace the demands of privacy. A growing family of techniques, collectively referred to as Privacy Enhancing Technologies (PETs), helps address these challenges. These include approaches such as Differential Privacy and Federated Learning, which can provide better utility than traditional data protection techniques while still adhering to privacy requirements. At the Dublin Research Lab, we research these key areas of the privacy landscape and develop privacy enhancing technologies applicable across a wide range of use cases.
While PETs are promising tools for developing AI systems, it is vital that they are examined through a regulatory lens. Equally, the AI systems built with them must comply with the many emerging regulations, such as the EU digital acts and the EU AI Act. However, the constantly evolving nature of AI and the demands of these regulations make this a challenging task.
Machine Unlearning
For instance, consider the "right to be forgotten", which empowers users to ask for their data to be removed. This has ramifications for AI systems, where the contribution of individual data samples to the model parameters learned during training is not explicit, and retraining the models from scratch without the removed data is not always feasible. Our research in Machine Unlearning tackles these challenges by developing approaches for efficiently removing data contributions from trained systems.
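To make the problem concrete, one well-known family of approaches from the unlearning literature is shard-based exact unlearning (in the spirit of SISA training, Bourtoule et al. 2021): the training set is partitioned into shards, one model is trained per shard, and a deletion request only requires retraining the single shard that held the sample. The sketch below is a minimal, hypothetical illustration of that idea, not a description of our own method; all class and function names are invented for the example.

```python
# Minimal sketch of shard-based exact unlearning (SISA-style).
# Illustrative only: names and design are assumptions for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression

class ShardedUnlearner:
    """Train one model per data shard; deleting a sample then only
    requires retraining the shard that contained it."""

    def __init__(self, n_shards=4):
        self.n_shards = n_shards

    def fit(self, X, y):
        # Assign each sample to a shard and remember the assignment.
        self.shard_of = np.arange(len(X)) % self.n_shards
        self.X, self.y = X.copy(), y.copy()
        self.mask = np.ones(len(X), dtype=bool)  # samples still "in" the model
        self.models = [self._fit_shard(s) for s in range(self.n_shards)]
        return self

    def _fit_shard(self, s):
        idx = (self.shard_of == s) & self.mask
        return LogisticRegression().fit(self.X[idx], self.y[idx])

    def forget(self, sample_idx):
        # Honour a deletion request: drop the sample and retrain
        # only its shard, not the whole ensemble.
        self.mask[sample_idx] = False
        self.models[self.shard_of[sample_idx]] = self._fit_shard(
            self.shard_of[sample_idx])

    def predict(self, X):
        # Aggregate shard predictions by majority vote (binary 0/1 labels).
        votes = np.stack([m.predict(X) for m in self.models])
        return (votes.mean(axis=0) >= 0.5).astype(int)

# Toy usage with synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)
model = ShardedUnlearner(n_shards=4).fit(X, y)
model.forget(sample_idx=17)  # retrains only shard 17 % 4 == 1
```

The design trade-off is the usual one: retraining one shard is far cheaper than retraining the whole model, at the cost of some accuracy from ensembling over smaller partitions.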
Data Privacy Risk Assessment
Similarly, one of the main blockers to exploiting datasets is the difficulty developers and data privacy officers (DPOs) face in understanding the nature of their data and the privacy vulnerabilities it contains. To this end, we are exploring Data Privacy Risk Assessment technologies from several points of view. Such technologies assist DPOs in classifying datasets, via statistical classification of semantic types, and in identifying privacy vulnerabilities. They also help quantify the impact of various privacy policies on data "utility", from both a theoretical and an applied point of view.
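As a concrete illustration, the snippet below computes one simple, classical risk measure: the k-anonymity of a dataset with respect to a chosen set of quasi-identifiers. It is a minimal sketch using assumed column names and toy data, not a description of our assessment tooling.

```python
# Illustrative k-anonymity check; column names and data are hypothetical.
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """Smallest equivalence-class size over the quasi-identifier columns.
    A record in a class of size k is indistinguishable from k-1 others."""
    return int(df.groupby(quasi_identifiers).size().min())

records = pd.DataFrame({
    "zip":       ["D04", "D04", "D08", "D08", "D08"],
    "age":       [34, 34, 51, 51, 51],
    "diagnosis": ["flu", "asthma", "flu", "flu", "diabetes"],
})
print(k_anonymity(records, ["zip", "age"]))  # -> 2: rarest (zip, age) pair covers 2 records
```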
Differential Privacy
By adding calibrated noise during training, Differential Privacy helps protect against unwanted inference about individuals' sensitive data, and it can be applied to data pipelines in concert with other PETs, such as the Federated Learning and Data Privacy Risk Assessment approaches outlined above. Our research in this area covers its applicability to Federated Learning and Deep Learning, as well as more fundamental issues around its implementation in the floating-point environments of modern computers.
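As a minimal illustration, the sketch below applies the Laplace mechanism to a counting query. A counting query has sensitivity 1, so Laplace noise with scale 1/ε yields ε-differential privacy. The function name and data are hypothetical, and the naive floating-point sampler used here is exactly where the implementation subtleties mentioned above arise: the discrete structure of floating-point noise can leak information if not handled carefully.

```python
# Minimal sketch of the Laplace mechanism; names are illustrative,
# not a specific library API.
import numpy as np

def laplace_count(data, predicate, epsilon, rng=None):
    """Release an epsilon-DP count via the Laplace mechanism.

    Adding or removing one individual changes the count by at most 1
    (sensitivity 1), so noise with scale 1/epsilon gives epsilon-DP.
    """
    rng = rng if rng is not None else np.random.default_rng()
    true_count = sum(1 for x in data if predicate(x))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Toy usage: how many individuals are 40 or older?
ages = [34, 51, 29, 62, 45]
print(laplace_count(ages, lambda a: a >= 40, epsilon=0.5))
```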
We are currently participating in the FLUIDOS and AI4Media EU Horizon projects, where we apply our privacy enhancing technologies to the creation of a seamless continuum between edge and cloud.