
Open Access

REVIEW

The Semantic Design Space of Retrieval-Augmented Recommender Systems: A Systematic Review of LLM-Based Approaches

Minhyeok Choi1, Imran Ahsan2, Hyunwook Yu1, Taeyoung Choe1, Mucheol Kim1,*
1 Department of Computer Science and Engineering, Chung-Ang University, Seoul, Republic of Korea
2 Department of Smart Cities, Chung-Ang University, Seoul, Republic of Korea
* Corresponding Author: Mucheol Kim

Computers, Materials & Continua https://doi.org/10.32604/cmc.2026.079504

Received 22 January 2026; Accepted 26 March 2026; Published online 22 April 2026

Abstract

Large language models (LLMs) are increasingly integrated into recommender systems to support semantic reasoning, natural language understanding, and user-adaptive personalization. However, their reliance on static parametric knowledge and fixed representations limits robustness in dynamic environments, particularly under long-tail and cold-start conditions. Retrieval-augmented architectures address these limitations by grounding LLMs in external, non-parametric knowledge sources. This systematic literature review synthesizes 138 peer-reviewed conference and journal studies published between 2023 and 2025 on retrieval-augmented and LLM-enhanced recommendation. We analyze these works through a three-dimensional framework covering (i) domain application, (ii) semantic feature and representation design, and (iii) algorithmic strategies for retrieval and personalization. The review shows that current research is concentrated in general recommendation and information retrieval; that similarity/retrieval, user-item interaction, and textual content signals dominate semantic modeling; and that LLM and BERT-style encoders form the primary representation backbones, while graph-based, multimodal, and hybrid approaches remain comparatively underexplored. Algorithmically, most systems adopt generic LLM-centric modeling, with limited use of retrieval optimization, reinforcement learning, or structure-aware strategies, and only sporadic explicit treatment of cold-start, hallucination, and robustness issues. By mapping co-occurrence patterns between domains, semantic features, representation choices, and strategy families, this review identifies concrete gaps and transfer opportunities for future work on retrieval-augmented recommendation, and provides a structured reference for designing more context-aware, explainable, and data-efficient LLM-based recommender systems.
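The retrieve-then-ground pattern the abstract describes can be illustrated with a minimal toy sketch: embed candidate items, retrieve the most similar ones for a user request, and inject them as evidence into an LLM prompt. All names, the item corpus, and the bag-of-words encoder below are hypothetical stand-ins (a real system would use an LLM or BERT-style encoder and a vector index), not any specific system surveyed in this review.

```python
import math

# Hypothetical toy corpus: item descriptions act as the external,
# non-parametric knowledge source that grounds the LLM.
ITEMS = {
    "i1": "wireless noise cancelling headphones for travel",
    "i2": "mechanical keyboard with tactile switches",
    "i3": "ergonomic office chair with lumbar support",
}

def embed(text):
    """Toy bag-of-words embedding (stand-in for an LLM/BERT encoder)."""
    vec = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Retrieval step: rank items by semantic similarity to the request."""
    q = embed(query)
    ranked = sorted(ITEMS, key=lambda i: cosine(q, embed(ITEMS[i])), reverse=True)
    return ranked[:k]

def build_prompt(query, item_ids):
    """Augmentation step: inject retrieved evidence into the LLM prompt."""
    evidence = "\n".join(f"- {i}: {ITEMS[i]}" for i in item_ids)
    return (f"User request: {query}\n"
            f"Candidate items:\n{evidence}\n"
            f"Recommend one item and explain why.")

top_items = retrieve("noise cancelling headphones")
prompt = build_prompt("noise cancelling headphones", top_items)
```

Because the generator only sees retrieved candidates, its recommendation is constrained by up-to-date external evidence rather than static parametric knowledge, which is the core robustness argument for retrieval augmentation under cold-start conditions.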

Keywords

Large language models (LLMs); recommender system; retrieval-augmented generation (RAG); semantic features