
MIDI Seminar: Jérôme Fellus

Seminar title and speaker

Privacy protection of predictive machine learning models in exploitation phase with empirical max-entropy constraint.

Jérôme Fellus (IRISA, Rennes)

Date and location

Thursday, February 21, 2019, 2pm

ENSEA, room 318

Abstract

On a regular basis, people consent to the processing of their personal data by entities that use some form of predictive learning (e.g., recommenders, social networks, state administrations). Predictors are functions estimated from empirical data and subsequently applied to unseen inputs. Much attention has been devoted to limiting the privacy impact of training and releasing predictors as statistics involving sensitive variables; this is typically the scope of differential privacy (DP). Less focus has been given to privacy leaks in the exploitation phase, where the learned model is used to make predictions about unseen individuals. While one might expect privacy guarantees about the learning algorithm to automatically extend to the predictions made by the learned model, this is not natively handled by DP. We show that even under a strong DP assumption, a naively learned predictor will leak sensitive information during exploitation. We establish new vulnerability measures for this scenario and propose an empirical mitigation strategy that leverages “Do Not Predict” examples to maximize the entropy of the predictor w.r.t. sensitive variables. Interestingly, this mechanism circumvents the usual privacy-utility tradeoff and acts as a regularizer, giving the predictor better generalization capacity.
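
The abstract does not give implementation details, but one plausible reading of the mechanism is a training objective that combines an ordinary classification loss with a term that pushes the predictor toward maximum output entropy (a uniform distribution) on the “Do Not Predict” examples. The sketch below illustrates that idea only; the function names, the loss form, and the weighting parameter `lam` are assumptions for illustration, not the speaker's actual method.

```python
import torch
import torch.nn.functional as F

def loss_with_dnp_entropy(model, x_train, y_train, x_dnp, lam=1.0):
    """Hypothetical sketch: cross-entropy on labeled training data,
    plus a penalty that maximizes the predictor's output entropy on
    "Do Not Predict" (DNP) examples."""
    # Standard supervised loss on the labeled batch.
    ce = F.cross_entropy(model(x_train), y_train)

    # Output distribution on the DNP batch.
    log_probs_dnp = F.log_softmax(model(x_dnp), dim=1)
    probs_dnp = log_probs_dnp.exp()

    # Entropy H(p) = -sum_c p_c log p_c, averaged over the DNP batch.
    # We maximize it, i.e. subtract it from the minimized loss.
    entropy_dnp = -(probs_dnp * log_probs_dnp).sum(dim=1).mean()
    return ce - lam * entropy_dnp
```

Under this reading, forcing near-uniform predictions on sensitive inputs also acts as a regularizer on the model, which is consistent with the abstract's claim that the mechanism sidesteps the usual privacy-utility tradeoff.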

Bio

I’m a postdoc in the CIDRE team at IRISA in Rennes, working on the ANR project PAMELA (Personalized and decentrAlized MachinE Learning under constrAints), which addresses privacy-preserving distributed machine learning. I received my PhD from the University of Cergy-Pontoise for work on asynchronous gossip algorithms for large-scale machine learning and multimedia retrieval.
