
Open Access



Adversarial Attacks on Content-Based Filtering Journal Recommender Systems

Zhaoquan Gu1, Yinyin Cai1, Sheng Wang1, Mohan Li1, *, Jing Qiu1, Shen Su1, Xiaojiang Du2, Zhihong Tian1

1 Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, China.
2 Department of Computer and Information Sciences, Temple University, Philadelphia, USA.

* Corresponding Author: Mohan Li.

Computers, Materials & Continua 2020, 64(3), 1755-1770.


Recommender systems help people find what they really need. Academic papers are important achievements for researchers, who often face a great deal of choice when submitting their work. To improve the efficiency of selecting the most suitable journal for publication, journal recommender systems (JRS) can automatically provide a small number of candidate journals based on key information such as the title and the abstract. However, users or journal owners may attack the system for their own purposes. In this paper, we discuss adversarial attacks against content-based filtering JRS. We propose both a targeted attack method that makes some target journals appear more often in the recommendations and a non-targeted attack method that makes the system provide incorrect recommendations. We also conduct extensive experiments to validate the proposed methods. We hope this paper helps improve JRS by raising awareness of the existence of such adversarial attacks.
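The content-based filtering setting described above can be illustrated with a minimal sketch: each journal is represented by a text profile, a submitted title/abstract is turned into a TF-IDF vector, and journals are ranked by cosine similarity. The journal names, profile texts, and query below are illustrative assumptions, not data from the paper.

```python
import math
from collections import Counter

# Hypothetical toy corpus: each journal's profile is text drawn from the
# papers it publishes (names and texts are invented for illustration).
journal_profiles = {
    "Journal A": "deep learning neural networks image classification",
    "Journal B": "network security adversarial attacks intrusion detection",
    "Journal C": "graph theory combinatorics optimization algorithms",
}

def tf_idf_vectors(docs):
    """Build smoothed TF-IDF vectors for a dict of name -> text."""
    tokenized = {name: text.split() for name, text in docs.items()}
    n = len(tokenized)
    # Document frequency of each term across the corpus.
    df = Counter(term for toks in tokenized.values() for term in set(toks))
    vectors = {}
    for name, toks in tokenized.items():
        tf = Counter(toks)
        vectors[name] = {
            term: (count / len(toks)) * math.log((1 + n) / (1 + df[term]))
            for term, count in tf.items()
        }
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(title_and_abstract, profiles, k=2):
    """Return the k journals whose profiles are most similar to the query."""
    docs = dict(profiles)
    docs["__query__"] = title_and_abstract
    vecs = tf_idf_vectors(docs)
    query = vecs.pop("__query__")
    ranked = sorted(profiles, key=lambda j: cosine(query, vecs[j]), reverse=True)
    return ranked[:k]

print(recommend("adversarial attacks on deep neural networks", journal_profiles))
```

In this picture, a targeted attack perturbs the title/abstract (or a journal's profile) so that a chosen journal rises in the ranking, while a non-targeted attack perturbs the input so that the top-k list no longer matches the genuinely suitable journals.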


Cite This Article

Z. Gu, Y. Cai, S. Wang, M. Li, J. Qiu et al., "Adversarial attacks on content-based filtering journal recommender systems," Computers, Materials & Continua, vol. 64, no. 3, pp. 1755–1770, 2020.


This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.