Search Results (26)
  • Open Access

    ARTICLE

    An Adversarial Attack System for Face Recognition

    Yuetian Wang, Chuanjing Zhang, Xuxin Liao, Xingang Wang, Zhaoquan Gu*

    Journal on Artificial Intelligence, Vol.3, No.1, pp. 1-8, 2021, DOI:10.32604/jai.2021.014175

    Abstract Deep neural networks (DNNs) are widely adopted in daily life, and their security problems have drawn attention from both academic researchers and industrial engineers. Many related works show that DNNs are vulnerable to adversarial examples, which are generated by adding subtle perturbations to original images in both the digital and physical domains. As one of the most common applications of DNNs, face recognition systems can cause serious consequences if they are attacked with adversarial examples. In this paper, we implement an adversarial attack system for face recognition in both domains: in the digital domain, it generates adversarial face images to fool…

  • Open Access

    ARTICLE

    Adversarial Attacks on Featureless Deep Learning Malicious URLs Detection

    Bader Rasheed, Adil Khan, S. M. Ahsan Kazmi, Rasheed Hussain, Md. Jalil Piran*, Doug Young Suh

    CMC-Computers, Materials & Continua, Vol.68, No.1, pp. 921-939, 2021, DOI:10.32604/cmc.2021.015452

    Abstract Detecting malicious Uniform Resource Locators (URLs) is crucially important for preventing attackers from committing cybercrimes. Recent research has investigated the role of machine learning (ML) models in detecting malicious URLs. In this approach, features are first extracted from the URLs and then different ML models are trained on them. The limitation of this approach is that it requires manual feature engineering and does not consider the sequential patterns in the URL. Therefore, deep learning (DL) models are used to address these issues, since they are able to perform featureless detection. Furthermore, DL models give better accuracy and generalization to newly…

  • Open Access

    ARTICLE

    A Two-Stage Highly Robust Text Steganalysis Model

    Enlu Li, Zhangjie Fu*, Siyu Chen, Junfu Chen

    Journal of Cyber Security, Vol.2, No.4, pp. 183-190, 2020, DOI:10.32604/jcs.2020.015010

    Abstract With the development of natural language processing, deep learning, and other technologies, text steganography is advancing rapidly. However, adversarial attack methods have emerged that give text steganography the ability to actively deceive steganalysis. If terrorists use text steganography to spread terrorist messages, it will greatly disturb social stability. Steganalysis methods, especially those designed to resist adversarial attacks, therefore need further improvement. In this paper, we propose a two-stage, highly robust model for text steganalysis. The proposed method analyzes and extracts anomalous features at both the intra-sentential and inter-sentential levels. In the first phase, every sentence is first transformed into…

  • Open Access

    ARTICLE

    Adversarial Attacks on License Plate Recognition Systems

    Zhaoquan Gu, Yu Su, Chenwei Liu, Yinyu Lyu, Yunxiang Jian, Hao Li, Zhen Cao, Le Wang*

    CMC-Computers, Materials & Continua, Vol.65, No.2, pp. 1437-1452, 2020, DOI:10.32604/cmc.2020.011834

    Abstract The license plate recognition system (LPRS) has been widely adopted in daily life due to its efficiency and high accuracy. Deep neural networks are commonly used in the LPRS to improve recognition accuracy. However, researchers have found that deep neural networks have their own security problems that may lead to unexpected results. Specifically, they can easily be attacked by adversarial examples, which are generated by adding small perturbations to the original images and result in incorrect license plate recognition. There are some classic methods for generating adversarial examples, but they cannot be applied to the LPRS directly. In this paper,…

  • Open Access

    ARTICLE

    Adversarial Attacks on Content-Based Filtering Journal Recommender Systems

    Zhaoquan Gu, Yinyin Cai, Sheng Wang, Mohan Li*, Jing Qiu, Shen Su, Xiaojiang Du, Zhihong Tian

    CMC-Computers, Materials & Continua, Vol.64, No.3, pp. 1755-1770, 2020, DOI:10.32604/cmc.2020.010739

    Abstract Recommender systems are very useful for helping people find what they really need. Academic papers are important achievements for researchers, who often face a wide choice of journals when submitting their work. To make selecting the most suitable journal for publication more efficient, journal recommender systems (JRS) can automatically provide a small number of candidate journals based on key information such as the title and the abstract. However, users or journal owners may attack the system for their own purposes. In this paper, we discuss adversarial attacks against content-based filtering JRS. We…

  • Open Access

    ARTICLE

    A Fast Two-Stage Black-Box Deep Learning Network Attacking Method Based on Cross-Correlation

    Deyin Li, Mingzhi Cheng, Yu Yang*, Min Lei, Linfeng Shen

    CMC-Computers, Materials & Continua, Vol.64, No.1, pp. 623-635, 2020, DOI:10.32604/cmc.2020.09800

    Abstract Deep learning networks are widely used in various systems that require classification. However, deep learning networks are vulnerable to adversarial attacks, and the study of such attacks plays an important role in defense. Black-box attacks require less knowledge about target models than white-box attacks do, which makes them easier to launch and more valuable. However, state-of-the-art black-box attacks still suffer from low success rates and large visual distances between the generated adversarial images and the original images. This paper proposes a fast black-box attack based on cross-correlation (FBACC). The attack is carried out in two stages.…

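Several of the entries above (the face recognition, license plate recognition, and black-box attack papers) describe adversarial examples produced by adding small perturbations to an input image until a classifier misclassifies it. As a minimal, hedged sketch of that general idea, not of any listed paper's specific method, the following PyTorch snippet applies a basic FGSM-style perturbation; the model, image, and label names are placeholders for any differentiable classifier and its input.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an FGSM-style adversarial version of a batched input image.

    `model`, `image` (a tensor with pixel values in [0, 1]), and `label`
    are placeholders; any differentiable classifier works the same way.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A larger epsilon makes the attack more likely to succeed but also more visible, which is the success-rate versus visual-distance trade-off the FBACC abstract mentions.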
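The journal recommender entry above describes content-based filtering over titles and abstracts. As a rough, hedged sketch of that general approach, not of the listed paper's actual system, the snippet below ranks journals by TF-IDF cosine similarity between a manuscript's text and per-journal text profiles; the journal names and profile strings are invented purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical journal profiles: journal name -> concatenated text of
# representative past titles and abstracts (placeholder data).
JOURNAL_PROFILES = {
    "Journal A": "deep learning image classification adversarial robustness",
    "Journal B": "network security malware detection intrusion prevention",
}

def recommend_journals(title, abstract, top_k=1):
    """Rank journals by TF-IDF cosine similarity to the manuscript text."""
    names = list(JOURNAL_PROFILES)
    corpus = [JOURNAL_PROFILES[n] for n in names] + [title + " " + abstract]
    tfidf = TfidfVectorizer().fit_transform(corpus)
    query, profiles = tfidf[len(names)], tfidf[:len(names)]
    scores = cosine_similarity(query, profiles).ravel()
    return sorted(zip(names, scores), key=lambda p: p[1], reverse=True)[:top_k]

print(recommend_journals("Adversarial attacks on face recognition",
                         "We study small perturbations that fool deep models."))
```

An attacker who can influence the profile text or the manuscript text can shift these similarity scores, which is one way a content-based recommender could be manipulated.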