#6: Purpose-Aware Privacy-Preserving Recommendations with Manel Slokom
Episode number six of Recsperts is about purpose-aware privacy-preserving data for recommender systems. My guest is Manel Slokom, a 4th-year PhD student at Delft University of Technology. She served as a student volunteer at RecSys for three years in a row before becoming student volunteer co-chair herself in 2021. In addition to her work on privacy and fairness, she also dedicates herself to simulation and in particular synthetic data for recommender systems, co-organizing the 1st SimuRec Workshop as part of RecSys 2021.
This episode is definitely worth its longer runtime. Manel and I discussed fairness and privacy in recommender systems and how ratings can leak signals about sensitive personal information: classifiers can exploit rating data alone to effectively infer a user's gender. She explains "Personalized Blurring", the approach she developed to personalize gender obfuscation in user rating data, and how it can also contribute to more diverse recommendations.
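To make the threat model concrete, here is a minimal, hypothetical sketch in Python of the BlurMe-style idea that Personalized Blurring builds on (not Manel's actual method): a logistic regression classifier infers gender from raw rating vectors, and a profile is obfuscated by adding average ratings for items indicative of the opposite gender. The data, the choice of classifier, and the `obfuscate` helper are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder data: 200 users x 50 items; 0 marks an unrated item.
mask = rng.random((200, 50)) < 0.3                      # ~30% of items rated
ratings = (rng.integers(1, 6, size=(200, 50)) * mask).astype(float)
gender = rng.integers(0, 2, size=200)                   # 0/1 labels, purely synthetic

# A classifier trained on rating vectors alone can act as a gender-inference attack.
clf = LogisticRegression(max_iter=1000).fit(ratings, gender)
print("gender inference accuracy:", clf.score(ratings, gender))

def obfuscate(profile, opposite_class, k=5):
    """Add average ratings for the k items most indicative of the opposite gender."""
    # Positive coefficients point towards class 1, negative towards class 0.
    sign = 1.0 if opposite_class == 1 else -1.0
    indicative = np.argsort(sign * clf.coef_[0])[::-1][:k]
    blurred = profile.copy()
    for item in indicative:
        if blurred[item] == 0:                          # only fill unrated slots
            blurred[item] = ratings[:, item].mean()
    return blurred

blurred_user0 = obfuscate(ratings[0], opposite_class=1 - gender[0])
```

As discussed in the episode, Personalized Blurring refines this kind of global, one-list-fits-all obfuscation by personalizing which items are added for each individual user.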
In our discussion, we also touched on "data-centric AI", a term recently popularized by Andrew Ng, and how adapting feedback data may have underestimated effects on recommendations, pointing towards "data-centric recommender systems". In addition, we dived into the differences between simulated and synthetic data, which brought us to the SimuRec workshop that she co-organized as part of RecSys 2021.
Finally, Manel provides some recommendations for young researchers who want to become active RecSys community members and benefit from the exchange: talk to people and volunteer at RecSys.
Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.
Links from the Episode:
- Manel on Twitter
- Manel on LinkedIn
- Manel at TU Delft (find more papers referenced there)
- SimuRec Workshop at RecSys 2021
- FAccTrec Workshop at RecSys 2021
- Andrew Ng: Unbiggen AI (from IEEE Spectrum)
Papers:
- Slokom et al. (2021): Towards User-Oriented Privacy for Recommender System Data: A Personalization-Based Approach to Gender Obfuscation for User Profiles
- Weinsberg et al. (2012): BlurMe: Inferring and Obfuscating User Gender Based on Ratings
- Ekstrand et al. (2018): All The Cool Kids, How Do They Fit In?: Popularity and Demographic Biases in Recommender Evaluation and Effectiveness
- Slokom et al. (2018): Comparing Recommender Systems Using Synthetic Data
- Burke et al. (2018): Synthetic Attribute Data for Evaluating Consumer-side Fairness
- Burke et al. (2005): Identifying Attack Models for Secure Recommendation
- Narayanan et al. (2008): Robust De-anonymization of Large Sparse Datasets
General Links:
- Follow me on Twitter: https://twitter.com/LivesInAnalogia
- Send me your comments, questions and suggestions to marcel@recsperts.com
- Podcast Website: https://www.recsperts.com/