Search Results (13)
  • Open Access

    ARTICLE

    Adversarial Attacks on Featureless Deep Learning Malicious URLs Detection

    Bader Rasheed, Adil Khan, S. M. Ahsan Kazmi, Rasheed Hussain, Md. Jalil Piran, Doug Young Suh

    CMC-Computers, Materials & Continua, Vol.68, No.1, pp. 921-939, 2021, DOI:10.32604/cmc.2021.015452

    Abstract Detecting malicious Uniform Resource Locators (URLs) is crucial to preventing attackers from committing cybercrimes. Recent research has investigated the role of machine learning (ML) models in detecting malicious URLs. In this approach, features are first extracted from the URLs, and then different ML models are trained on them. The limitation of this approach is that it requires manual feature engineering and does not consider the sequential patterns in the URL. Deep learning (DL) models address these issues because they can perform featureless detection. Furthermore, DL models give better accuracy and generalization to newly…

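Aside: the result above contrasts feature-engineered ML with "featureless" DL detection that consumes the raw URL string. Below is a minimal sketch of that idea, assuming PyTorch; the CharURLClassifier model, the encode helper, and the example URLs are illustrative stand-ins, not the paper's architecture.

    # Minimal sketch (not the paper's model): a character-level classifier
    # that consumes raw URL strings, so no manual feature engineering is
    # needed. Assumes PyTorch is installed.
    import torch
    import torch.nn as nn

    class CharURLClassifier(nn.Module):
        def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            # The 1-D convolution picks up local character n-gram patterns;
            # the LSTM captures longer sequential structure in the URL.
            self.conv = nn.Conv1d(embed_dim, hidden_dim, kernel_size=5, padding=2)
            self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, 2)  # benign vs. malicious

        def forward(self, x):                  # x: (batch, seq_len) char codes
            e = self.embed(x).transpose(1, 2)  # (batch, embed_dim, seq_len)
            h = torch.relu(self.conv(e)).transpose(1, 2)
            _, (hn, _) = self.lstm(h)
            return self.head(hn[-1])           # class logits

    def encode(url, max_len=200):
        """Map a URL to a fixed-length tensor of ASCII codes (0 = padding)."""
        codes = [min(ord(c), 127) for c in url[:max_len]]
        codes += [0] * (max_len - len(codes))
        return torch.tensor(codes)

    model = CharURLClassifier()
    batch = torch.stack([encode("http://example.com/login"),
                         encode("http://paypa1-secure.xyz/verify?acct=1")])
    print(model(batch).softmax(dim=1))  # untrained, so outputs are arbitrary

Because the model sees every character, it needs no hand-built features such as URL length or token counts; that is the property the abstract's adversarial attacks then target.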
  • Open Access

    ARTICLE

    Adversarial Attacks on License Plate Recognition Systems

    Zhaoquan Gu, Yu Su, Chenwei Liu, Yinyu Lyu, Yunxiang Jian, Hao Li, Zhen Cao, Le Wang

    CMC-Computers, Materials & Continua, Vol.65, No.2, pp. 1437-1452, 2020, DOI:10.32604/cmc.2020.011834

    Abstract The license plate recognition system (LPRS) has been widely adopted in daily life due to its efficiency and high accuracy. Deep neural networks are commonly used in the LPRS to improve recognition accuracy. However, researchers have found that deep neural networks have security problems of their own that may lead to unexpected results. Specifically, they can be easily attacked by adversarial examples, which are generated by adding small perturbations to the original images and result in incorrect license plate recognition. There are classic methods for generating adversarial examples, but they cannot be applied to the LPRS directly. In this paper,…

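Aside: one of the classic perturbation methods the abstract above alludes to is the fast gradient sign method (FGSM). A minimal sketch follows, assuming PyTorch; the stand-in classifier, image tensor, and label are hypothetical, and this is not the paper's LPRS-specific attack.

    # Minimal FGSM sketch (a classic adversarial-example method; not the
    # paper's LPRS-specific attack). Assumes PyTorch is installed.
    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Perturb x by epsilon in the direction that increases loss on y."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()    # one signed-gradient step
        return x_adv.clamp(0, 1).detach()      # keep pixels in valid range

    # Toy demonstration with a stand-in classifier (hypothetical, untrained).
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)               # stand-in "plate" image
    y = torch.tensor([3])                      # its (pretend) true label
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())             # perturbation bounded by epsilon

The perturbation is bounded by epsilon per pixel, which is why adversarial images can look unchanged to a human while still flipping the recognizer's output.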
  • Open Access

    ARTICLE

    Adversarial Attacks on Content-Based Filtering Journal Recommender Systems

    Zhaoquan Gu, Yinyin Cai, Sheng Wang, Mohan Li, Jing Qiu, Shen Su, Xiaojiang Du, Zhihong Tian

    CMC-Computers, Materials & Continua, Vol.64, No.3, pp. 1755-1770, 2020, DOI:10.32604/cmc.2020.010739

    Abstract Recommender systems are very useful for helping people find what they need. Academic papers are important achievements for researchers, who often have many venues to choose from when submitting their work. To make selecting the most suitable journal more efficient, journal recommender systems (JRS) can automatically provide a small number of candidate journals based on key information such as the title and the abstract. However, users or journal owners may attack the system for their own purposes. In this paper, we discuss the adversarial attacks against content-based filtering JRS. We…

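Aside: a common way to build the content-based filtering the abstract above describes is TF-IDF text profiles ranked by cosine similarity. A minimal sketch follows, assuming scikit-learn; the journal profile strings and the recommend helper are illustrative, not the paper's system.

    # Minimal content-based journal recommender sketch (assumed TF-IDF plus
    # cosine similarity; the paper's actual JRS may differ). Requires
    # scikit-learn.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    journals = {
        "CMC-Computers, Materials & Continua":
            "computing materials continua deep learning security networks",
        "Journal of Cyber Security":
            "cyber security intrusion detection malware adversarial attacks",
        "Sound & Vibration":
            "acoustics vibration structural dynamics noise control",
    }

    def recommend(title, abstract, top_k=2):
        """Rank journals by cosine similarity between the manuscript text
        and a short textual profile of each journal."""
        docs = list(journals.values()) + [title + " " + abstract]
        tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
        sims = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()
        ranked = sorted(zip(journals, sims), key=lambda p: p[1], reverse=True)
        return ranked[:top_k]

    print(recommend("Adversarial Attacks on Content-Based Filtering",
                    "We study attacks against journal recommender systems."))

Because the ranking depends only on word statistics of the title and abstract, an attacker who can edit that text (or a journal's profile) can nudge the similarity scores, which is the attack surface the paper examines.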
Displaying results 11-13 of 13 (page 2).