Learning compact yet accurate Generative Adversarial Networks for recommender systems

Authors:

Highlights:

Abstract

Recently, Generative Adversarial Networks (GANs) have received much attention in recommender systems because they can capture complex data distributions. They rely on two sub-networks, a generator and a discriminator, to generate ‘fake’ yet plausible data, thereby improving recommendation accuracy. However, most existing GAN variants achieve these accuracy gains by greatly increasing model complexity, especially the number of parameters, which makes such methods difficult to deploy. We therefore aim to resolve this issue by learning compact yet accurate GANs that can synthesize valuable data with fewer generator parameters. To this end, we integrate knowledge distillation (KD, in the form of a teacher–student architecture) into GANs to reduce model complexity while improving accuracy. To the best of our knowledge, this work is the first to learn compact GANs for recommender systems through a KD framework, which we name StuGAN. Specifically, a student discriminator refines the teacher’s knowledge, and both the generator and the discriminator are then enhanced by leveraging the refined knowledge via adversarial learning: the generator is constrained to produce fake data approximating both the ground truth and the teacher’s predicted preferences, while the discriminator learns to distinguish the generator’s preferences, as well as the more confusing preferences of the teacher, from the ground truth. Finally, we conduct extensive experiments on two real-world datasets (Ciao and LastFM), and the results show that our approach can reduce the number of model parameters by as much as half while maintaining comparable recommendation accuracy.
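To make the training scheme described above more concrete, below is a minimal PyTorch sketch of how such a distillation-augmented adversarial objective might look. It is only an illustration of the general idea, not the paper's implementation: all module names, network sizes, loss weights (StudentGenerator, alpha, beta), and the toy data are hypothetical assumptions.

```python
# Hypothetical sketch: a compact student generator trained adversarially while
# distilling a larger teacher's predicted preferences, in the spirit of StuGAN.
import torch
import torch.nn as nn

N_ITEMS = 100  # hypothetical catalogue size

class StudentGenerator(nn.Module):
    """Maps a user's observed interaction vector to predicted preferences."""
    def __init__(self, n_items, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_items, hidden), nn.ReLU(),
            nn.Linear(hidden, n_items), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores a preference vector as real (ground truth) vs. generated."""
    def __init__(self, n_items, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_items, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

gen, disc = StudentGenerator(N_ITEMS), Discriminator(N_ITEMS)
teacher = StudentGenerator(N_ITEMS, hidden=256)  # stands in for a large pretrained teacher
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()
alpha, beta = 0.5, 0.5  # hypothetical weights for distillation / reconstruction terms

real = (torch.rand(8, N_ITEMS) > 0.9).float()  # toy implicit-feedback batch
for step in range(100):
    with torch.no_grad():
        teacher_pref = teacher(real)  # teacher's soft predicted preferences

    # Discriminator: ground truth counts as real; both the generator's output
    # and the teacher's (more confusing) predictions are treated as fake.
    fake = gen(real)
    d_loss = (bce(disc(real), torch.ones(8, 1))
              + bce(disc(fake.detach()), torch.zeros(8, 1))
              + bce(disc(teacher_pref), torch.zeros(8, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to both the
    # ground truth (reconstruction) and the teacher's soft predictions (distillation).
    fake = gen(real)
    g_loss = (bce(disc(fake), torch.ones(8, 1))
              + alpha * bce(fake, teacher_pref)
              + beta * bce(fake, real))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Under this reading, the distillation term pulls the compact generator toward the teacher's soft preferences, while the extra "teacher as fake" term gives the discriminator harder negatives to separate from the ground truth.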

Keywords: Knowledge distillation, Generative Adversarial Networks, Model compression, Recommender system

Review history: Received 24 March 2022, Revised 25 August 2022, Accepted 13 September 2022, Available online 17 September 2022, Version of Record 6 October 2022.

DOI: https://doi.org/10.1016/j.knosys.2022.109900