Open Access

ARTICLE

CIT-Rec: Enhancing Sequential Recommendation System with Large Language Models

Ziyu Li1, Zhen Chen2, Xuejing Fu2, Tong Mo1,*, Weiping Li1

1 School of Software and Microelectronics, Peking University, Beijing, 100871, China
2 Information Application Research Center of Shanghai Municipal Administration for Market Regulation, Shanghai, 200032, China

* Corresponding Author: Tong Mo.

Computers, Materials & Continua 2026, 86(3), 100. https://doi.org/10.32604/cmc.2025.071994

Abstract

Recommendation systems are key to boosting user engagement, satisfaction, and retention, particularly on media platforms where personalized content is vital. Sequential recommendation systems learn from user-item interactions to predict future items of interest. However, many current methods rely on unique user and item IDs, limiting their ability to represent users and items effectively, especially in zero-shot scenarios where training data is scarce. With the rapid development of Large Language Models (LLMs), researchers are exploring their potential to enhance recommendation systems. Yet a semantic gap remains between the linguistic semantics of LLMs and the collaborative semantics of recommendation systems, where items are typically indexed by IDs. Moreover, most research focuses on item representations, neglecting personalized user modeling. To address these issues, we propose CIT-Rec, an LLM-based sequential recommendation framework that integrates Collaborative semantics for user representation with Image and Text information for item representation to enhance Recommendations. Specifically, by aligning intuitive image information with semantically rich text, we represent items more accurately. We focus not only on item representations but also on user representations: to capture users' personalized preferences more precisely, we train traditional sequential recommendation models on users' historical interaction data, effectively capturing behavioral patterns. Finally, by combining LLMs with traditional sequential recommendation models, we enable the LLM to understand linguistic semantics while also capturing collaborative semantics. Extensive evaluations on real-world datasets show that our model outperforms baseline methods, effectively combining user interaction history with item visual and textual modalities to provide personalized recommendations.
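The abstract describes fusing a collaborative user embedding (from a traditional sequential model) with aligned image and text item features before they reach the LLM. The following is a minimal conceptual sketch of such a fusion, not the authors' released code: all module names, dimensions, and the linear-projection design are illustrative assumptions.

```python
# Conceptual sketch only: projects collaborative user embeddings and aligned
# image+text item features into an LLM's hidden space as "soft tokens".
# Dimensions, encoders, and fusion strategy are assumptions for illustration.
import torch
import torch.nn as nn

class CITRecFusionSketch(nn.Module):
    def __init__(self, cf_dim=64, vis_dim=512, txt_dim=512, llm_dim=4096):
        super().__init__()
        # Maps the collaborative user embedding (e.g., from a pretrained
        # sequential recommender) into the LLM hidden space.
        self.user_proj = nn.Linear(cf_dim, llm_dim)
        # Maps concatenated image+text item features (e.g., from a
        # CLIP-style vision-language encoder) into the same space.
        self.item_proj = nn.Linear(vis_dim + txt_dim, llm_dim)

    def forward(self, user_cf_emb, item_img_feat, item_txt_feat):
        # user_cf_emb:   (batch, cf_dim)           collaborative user vector
        # item_img_feat: (batch, seq_len, vis_dim) visual features per item
        # item_txt_feat: (batch, seq_len, txt_dim) textual features per item
        user_tokens = self.user_proj(user_cf_emb).unsqueeze(1)      # (batch, 1, llm_dim)
        item_tokens = self.item_proj(
            torch.cat([item_img_feat, item_txt_feat], dim=-1))      # (batch, seq_len, llm_dim)
        # These soft tokens would be interleaved with the instruction
        # prompt's token embeddings before being passed to the LLM.
        return torch.cat([user_tokens, item_tokens], dim=1)
```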

Keywords

Large language models; vision language models; sequential recommendation; instruction tuning

Cite This Article

APA Style
Li, Z., Chen, Z., Fu, X., Mo, T., & Li, W. (2026). CIT-Rec: Enhancing Sequential Recommendation System with Large Language Models. Computers, Materials & Continua, 86(3), 100. https://doi.org/10.32604/cmc.2025.071994
Vancouver Style
Li Z, Chen Z, Fu X, Mo T, Li W. CIT-Rec: Enhancing Sequential Recommendation System with Large Language Models. Comput Mater Contin. 2026;86(3):100. https://doi.org/10.32604/cmc.2025.071994
IEEE Style
Z. Li, Z. Chen, X. Fu, T. Mo, and W. Li, “CIT-Rec: Enhancing Sequential Recommendation System with Large Language Models,” Comput. Mater. Contin., vol. 86, no. 3, pp. 100, 2026. https://doi.org/10.32604/cmc.2025.071994



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.