Abstract
Traditional collaborative filtering (CF) based recommender systems tend to perform poorly when the observed user-item interactions/ratings are highly sparse. To address this, we propose a learning framework that improves collaborative filtering with a synthetic feedback loop (CF-SFL) to simulate user feedback. The proposed framework consists of a "recommender" and a "virtual user". The "recommender" is formulated as a CF model that recommends items according to observed user preferences. The "virtual user" estimates rewards for the recommended items and generates *feedback* in addition to the observed user preferences. Connected together, the "recommender" and the "virtual user" form a closed loop that recommends items to users and imitates the *unobserved* feedback of the users on the recommended items. The synthetic feedback is used to augment the observed user preferences and improve recommendation results. Theoretically, this model design can be interpreted as inverse reinforcement learning, which can be learned effectively via rollout (simulation). Experimental results show that the proposed framework is able to enrich the learning of user preferences and boost the performance of existing collaborative filtering methods on multiple datasets.
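To make the closed loop concrete, the following is a minimal sketch of the recommend-then-feedback cycle the abstract describes. The linear recommender, the softmax-based `virtual_user`, the mixing weight `alpha`, and the fixed number of rollout steps are all illustrative assumptions, standing in for the paper's learned CF model and reward estimator; the actual CF-SFL framework trains both components jointly under its inverse reinforcement learning interpretation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 50, 100, 16

# Toy observed implicit feedback: a sparse binary user-item matrix.
R = (rng.random((n_users, n_items)) < 0.05).astype(float)

# Hypothetical recommender: a linear encoder/decoder over the item space,
# standing in for any CF model that maps a preference vector to item scores.
W_enc = rng.normal(scale=0.1, size=(n_items, dim))
W_dec = rng.normal(scale=0.1, size=(dim, n_items))

def recommend(pref):
    """Score all items from a (possibly augmented) preference vector."""
    return pref @ W_enc @ W_dec

def virtual_user(scores):
    """Hypothetical virtual user: turn recommended scores into synthetic
    feedback via a row-wise softmax, standing in for a learned reward
    estimator."""
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Closed loop: recommend -> synthesize feedback -> augment preferences.
alpha = 0.5  # assumed mixing weight for the synthetic feedback
pref = R.copy()
for _ in range(3):  # a few rollout (simulation) steps
    scores = recommend(pref)
    feedback = virtual_user(scores)
    pref = R + alpha * feedback  # observed preferences + synthetic feedback

# Top-10 recommendations for user 0 from the augmented preferences.
top_k = np.argsort(-recommend(pref), axis=1)[:, :10]
print(top_k[0])
```

The key point the sketch captures is that the synthetic feedback is fed back into the recommender's input, so each rollout step refines the preference signal beyond the sparse observed interactions.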
| Original language | English (US) |
|---|---|
| Journal | arXiv preprint |
| State | Published - Oct 21 2019 |
| Externally published | Yes |
Keywords
- cs.IR
- cs.LG
- stat.ML