Personalized Representation With Contrastive Loss for Recommendation Systems

Published: 01 January 2024

Abstract

Sequential recommendation mines a user's interaction sequence and timing information to produce better recommendations and is therefore attracting increasing attention. Existing work on sequential recommendation tends to build new models, while the study of the loss function is largely neglected. Despite the growing attention paid to contrastive learning, we argue that its key ingredient is the contrastive loss (CL), which also offers a new option for sequential recommendation. However, we find that CL works against the personalized representation of features. First, it imposes only a relative constraint that pushes positive and negative samples apart, without any absolute constraint. Second, recent studies have shown that embeddings should be uniformly distributed, yet CL only separates the positive and negative samples within a training batch rather than distributing all items uniformly. These two shortcomings make the embedding space too compact, which harms personalized representation and recommendation quality. Therefore, this article proposes a Personalized Contrastive Loss (PCL) that combines CL with the absolute constraints of BCE/CE and employs regularization to make the representations uniformly distributed. Experiments on several commonly used datasets achieve state-of-the-art results. The code and data will be available on GitHub.
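To make the idea above concrete, the sketch below shows one way to combine a relative InfoNCE-style contrastive term with an absolute BCE term and a uniformity regularizer on L2-normalized embeddings. It is not the authors' released code; the function name pcl_style_loss, the loss weights lam_bce and lam_unif, and the temperature are illustrative assumptions.

    # Illustrative sketch only: combines an InfoNCE-style contrastive term,
    # an absolute BCE term, and a uniformity regularizer. All names and
    # weights are assumptions, not the paper's implementation.
    import torch
    import torch.nn.functional as F

    def pcl_style_loss(seq_emb, pos_emb, neg_emb,
                       temperature=0.1, lam_bce=1.0, lam_unif=0.1):
        """seq_emb: (B, d) sequence/user representations
           pos_emb: (B, d) embeddings of the ground-truth next items
           neg_emb: (B, K, d) embeddings of K sampled negative items
           Assumes batch size B > 1 (the uniformity term needs pairs)."""
        seq = F.normalize(seq_emb, dim=-1)
        pos = F.normalize(pos_emb, dim=-1)
        neg = F.normalize(neg_emb, dim=-1)

        # Relative constraint: InfoNCE over one positive and K negatives.
        pos_logit = (seq * pos).sum(-1, keepdim=True) / temperature      # (B, 1)
        neg_logit = torch.einsum('bd,bkd->bk', seq, neg) / temperature   # (B, K)
        logits = torch.cat([pos_logit, neg_logit], dim=1)                # (B, 1+K)
        labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
        cl = F.cross_entropy(logits, labels)

        # Absolute constraint: BCE pushes positive scores toward 1 and
        # negative scores toward 0, independently of other batch samples.
        bce = F.binary_cross_entropy_with_logits(
            pos_logit, torch.ones_like(pos_logit)
        ) + F.binary_cross_entropy_with_logits(
            neg_logit, torch.zeros_like(neg_logit)
        )

        # Uniformity regularizer: log of the mean Gaussian potential over
        # pairwise squared distances, spreading embeddings on the sphere.
        pdist = torch.pdist(pos, p=2).pow(2)
        unif = torch.log(torch.exp(-2.0 * pdist).mean() + 1e-8)

        return cl + lam_bce * bce + lam_unif * unif

In this sketch, the BCE term anchors positive scores near 1 and negative scores near 0 regardless of the other samples in the batch, while the uniformity term (in the spirit of Wang and Isola's alignment-and-uniformity analysis) spreads item representations over the hypersphere instead of letting them collapse into a compact region.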


Cited By

  • Inter- and Intra-Domain Potential User Preferences for Cross-Domain Recommendation, IEEE Transactions on Multimedia, vol. 26, pp. 8014–8025, 2024, doi: 10.1109/TMM.2024.3374577.


Published In

IEEE Transactions on Multimedia, Volume 26, 2024
Publisher: IEEE Press
