Snapcase - Regain Control over Your Predictions with Low-Latency Machine Unlearning

Abstract

The ‘right-to-be-forgotten’ requires the removal of personal data from trained machine learning (ML) models via machine unlearning. Conducting such unlearning with low latency is crucial for responsible data management, for example in a scenario where a person suffering from alcohol addiction decides to stop drinking, but would otherwise still be exposed to ads and recommendations for alcohol by ML models that learned from their past consumption behavior. Low-latency unlearning is challenging, but possible for certain classes of ML models when treating them as ‘materialised views’ over training data, with carefully chosen operations and data structures for computing updates. We present Snapcase, a recommender system that can unlearn user interactions with sub-second latency on a large grocery shopping dataset with 33 million purchases and 200 thousand users. Its implementation is based on incremental view maintenance with Differential Dataflow and a custom algorithm and data structure for maintaining a top-k aggregation over the result of a sparse matrix-matrix multiplication. We demonstrate how interactive, low-latency unlearning empowers users in critical scenarios to remove sensitive items from their recommendations and to drastically reduce their data’s negative influence on other users’ predictions.
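To illustrate the core idea, the following Python sketch treats the item co-occurrence matrix G = AᵀA of a toy user-item interaction matrix A as a ‘materialised view’ and applies only a small delta to it when a single interaction is unlearned. This is an illustrative sketch with assumed names and toy data, not the Snapcase implementation, which maintains the view and its top-k aggregation incrementally with Differential Dataflow and a custom data structure.

```python
# Illustrative sketch only: unlearning one user-item interaction by updating the
# co-occurrence view G = A^T A in place instead of recomputing it from scratch.
import numpy as np
from scipy.sparse import lil_matrix

def topk_similar(cooc, k):
    """Top-k most co-purchased items per item (dense argsort, for brevity only)."""
    dense = cooc.toarray().astype(float)
    np.fill_diagonal(dense, -np.inf)          # exclude self-similarity
    return np.argsort(-dense, axis=1)[:, :k]

# Toy user-item interaction matrix A (3 users x 4 items).
A = lil_matrix(np.array([[1, 1, 0, 1],
                         [0, 1, 1, 0],
                         [1, 1, 1, 0]], dtype=np.int64))
G = (A.T @ A).tolil()                         # materialised view: item co-occurrences
print("before unlearning:\n", topk_similar(G, k=2))

# Unlearn interaction (user u, item i). Only row/column i of G changes, and only at
# the positions of the other items in u's basket -- no full recomputation of A^T A.
u, i = 0, 3
basket = A[u].nonzero()[1]
for j in basket:
    G[i, j] -= 1                              # remove u's contribution to entry (i, j)
    if j != i:
        G[j, i] -= 1                          # ...and to the symmetric entry (j, i)
A[u, i] = 0                                   # drop the interaction itself

print("after unlearning:\n", topk_similar(G, k=2))
```

The delta only touches the row and column of the unlearned item at the positions of the other items in the user’s basket, which is what makes sub-second unlearning plausible at the scale of the grocery dataset; the sketch re-sorts the top-k from scratch, whereas Snapcase maintains that aggregation incrementally as well.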

Publication
International Conference on Very Large Data Bases (VLDB, demo)