Abstract
Recommendation systems rely heavily on users' behavioural and preferential data (e.g. ratings and likes) to produce accurate recommendations. However, such data aggregation and analysis practices by Service Providers (SPs) raise privacy concerns among users. Local differential privacy (LDP) based perturbation mechanisms address this concern by adding noise to users' data on the user side before sending it to the SP. The SP then uses the perturbed data to perform recommendations. Although LDP protects the privacy of users from the SP, it causes a substantial decline in recommendation accuracy. We propose an LDP-based Matrix Factorization (MF) with a Mixture-of-Gaussians (MoG) model to address this problem. The LDP perturbation mechanism, i.e., Bounded Laplace (BLP), regulates the effect of noise by confining the perturbed ratings to a predetermined domain. We derive a sufficient condition on the scale parameter for BLP to satisfy ε-LDP. We use the MoG model at the SP to estimate the noise added locally to the ratings, and the MF algorithm to predict missing ratings. Our LDP-based recommendation system improves predictive accuracy without violating LDP principles. We demonstrate through empirical evaluations on three real-world datasets, i.e., MovieLens, Libimseti and Jester, that our method offers a substantial increase in recommendation accuracy under a strong privacy guarantee.
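The user-side perturbation step described above can be illustrated with a minimal sketch of a bounded Laplace mechanism: Laplace noise is added to the true rating, and samples are redrawn until the perturbed rating falls inside the fixed output domain. This is only an illustrative assumption of how such a mechanism is commonly realised (rejection sampling); the function name, parameters, and the choice of `scale` here are hypothetical, and the paper's own sufficient condition on the scale parameter for ε-LDP is not reproduced.

```python
import random

def bounded_laplace(value, scale, lower, upper):
    """Sketch of a bounded Laplace perturbation: add Laplace(0, scale)
    noise to `value` and resample until the result lies in [lower, upper].
    The scale must be set according to the paper's sufficient condition
    for the mechanism to satisfy eps-LDP (not derived here)."""
    if not (lower <= value <= upper):
        raise ValueError("true rating must lie in the output domain")
    while True:
        # Laplace(0, scale) as the difference of two Exp(1/scale) draws
        noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
        perturbed = value + noise
        if lower <= perturbed <= upper:
            return perturbed

# Example: perturb a 4-star rating on a 1-5 rating scale
random.seed(0)
perturbed_rating = bounded_laplace(4.0, scale=1.0, lower=1.0, upper=5.0)
print(perturbed_rating)
```

The SP receives only `perturbed_rating`, never the true rating; by construction every released value stays within the known rating domain, which is what lets the server-side MoG model the noise distribution.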
Original language | English
---|---
Pages (from-to) | 4151-4163
Number of pages | 12
Journal | IEEE Transactions on Knowledge and Data Engineering
Volume | 35
Issue number | 4
Early online date | 9 Nov 2021
DOIs |
Publication status | Published - 1 Apr 2023
Keywords
- Data Privacy
- Gaussian Mixture Model
- Local Differential Privacy
- Recommendation Systems
- Differential privacy
- Privacy
- Perturbation methods
- Data aggregation
- Prediction algorithms
- Data models