Synergizing local and global models for matrix approximation
Chao Chen, Hao Zhang, et al.
CIKM 2019
Matrix approximation (MA) is one of the most popular techniques in today's recommender systems. Most MA-based recommender systems are formulated as risk minimization problems, and achieving minimum expected risk during model learning is critical to recommendation accuracy. This paper addresses the expected risk minimization problem, in which the expected risk can be bounded by the sum of the optimization error and the generalization error. Based on uniform stability theory, we propose an expected risk minimized matrix approximation method (ERMMA), which is designed to achieve a better tradeoff between optimization error and generalization error and thereby reduce the expected risk of the learned MA models. Theoretical analysis shows that ERMMA achieves a lower expected risk bound than existing MA methods. Experimental results on the MovieLens and Netflix datasets demonstrate that ERMMA outperforms six state-of-the-art MA-based recommendation methods on both the rating prediction and item ranking tasks.
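The error decomposition the abstract relies on can be sketched in standard notation; the symbols here (loss \ell, training-set size n, rating distribution \mathcal{D}, learned model \hat{f}, best-in-class model f^{*}) are illustrative and not necessarily the paper's own:

\[
R(\hat{f}) \;=\; \mathbb{E}_{(u,i,r)\sim\mathcal{D}}\!\left[\ell\bigl(\hat{f}(u,i),\,r\bigr)\right],
\qquad
\hat{R}_n(\hat{f}) \;=\; \frac{1}{n}\sum_{k=1}^{n}\ell\bigl(\hat{f}(u_k,i_k),\,r_k\bigr),
\]
\[
R(\hat{f}) - R(f^{*})
\;=\;
\underbrace{\hat{R}_n(\hat{f}) - \hat{R}_n(f^{*})}_{\text{optimization error}}
\;+\;
\underbrace{\bigl(R(\hat{f}) - \hat{R}_n(\hat{f})\bigr) + \bigl(\hat{R}_n(f^{*}) - R(f^{*})\bigr)}_{\text{generalization error terms}}.
\]

The generalization terms are what uniform stability theory controls: for a \beta-uniformly stable learning algorithm with loss bounded by M, the standard bound gives, with probability at least 1-\delta, R(\hat{f}) \le \hat{R}_n(\hat{f}) + 2\beta + (4n\beta + M)\sqrt{\ln(1/\delta)/(2n)}. This is the type of bound traded off against the optimization error; the paper's exact statement and constants may differ.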