Search Results (29)
  • Open Access

    ARTICLE

    VANET Jamming and Adversarial Attack Defense for Autonomous Vehicle Safety

    Haeri Kim, Jong-Moon Chung

    CMC-Computers, Materials & Continua, Vol.71, No.2, pp. 3589-3605, 2022, DOI:10.32604/cmc.2022.023073 - 07 December 2021

    Abstract The development of Vehicular Ad-hoc Network (VANET) technology is helping Intelligent Transportation System (ITS) services become a reality. Vehicles can use VANETs to communicate safety messages on the road (while driving), report their location, and share road-condition information in real time. However, intentional and unintentional (e.g., packet/frame collision) wireless signal jamming can occur, which degrades the quality of communication over the channel, prevents the reception of safety messages, and thereby poses a safety hazard to the vehicle's passengers. In this paper, VANET jamming detection applying Support Vector Machine (SVM) machine learning…
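    A minimal sketch of how an SVM-based jamming detector along these lines might look, using scikit-learn on synthetic channel statistics; the features (RSSI variance, packet delivery ratio, noise floor) are illustrative assumptions, not the feature set used in the paper:

    ```python
    # Illustrative sketch: an SVM jamming detector trained on hypothetical
    # per-window channel features (assumed, not the paper's feature set).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)

    # Synthetic training data: rows are observation windows, columns are
    # [rssi_variance, packet_delivery_ratio, noise_floor_dbm].
    normal = np.column_stack([
        rng.normal(2.0, 0.5, 500),    # low RSSI variance
        rng.normal(0.95, 0.02, 500),  # high delivery ratio
        rng.normal(-95, 2.0, 500),    # quiet noise floor
    ])
    jammed = np.column_stack([
        rng.normal(8.0, 2.0, 500),    # erratic RSSI
        rng.normal(0.40, 0.10, 500),  # many lost frames
        rng.normal(-70, 4.0, 500),    # elevated noise floor
    ])
    X = np.vstack([normal, jammed])
    y = np.array([0] * 500 + [1] * 500)  # 0 = normal, 1 = jammed

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)

    # Classify a new observation window.
    window = np.array([[7.5, 0.45, -72.0]])
    print("jamming detected" if clf.predict(window)[0] == 1 else "channel normal")
    ```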

  • Open Access

    ARTICLE

    Deep Image Restoration Model: A Defense Method Against Adversarial Attacks

    Kazim Ali, Adnan N. Qureshi, Ahmad Alauddin Bin Arifin, Muhammad Shahid Bhatti, Abid Sohail, Rohail Hassan

    CMC-Computers, Materials & Continua, Vol.71, No.2, pp. 2209-2224, 2022, DOI:10.32604/cmc.2022.020111 - 07 December 2021

    Abstract Deep learning and computer vision are fast-growing fields in information technology. Deep learning algorithms and computer vision have achieved great success in applications such as image classification, speech recognition, self-driving vehicles, disease diagnostics, and many more. Despite this success, these learning algorithms face severe threats from adversarial attacks. Adversarial examples are inputs, such as images in the computer vision field, that are intentionally but slightly changed or perturbed. These changes are humanly imperceptible, yet they are misclassified by a model with high probability and severely…
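    For context, the kind of imperceptible perturbation the abstract describes can be produced with the classic Fast Gradient Sign Method (FGSM); this sketch shows that generic attack, not the paper's restoration defense:

    ```python
    # Illustrative FGSM sketch: shift the input by epsilon along the sign of
    # the loss gradient to produce a near-identical but misclassified image.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        """image: [1, C, H, W] float tensor in [0, 1]; returns an adversarial copy."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        adv = image + epsilon * image.grad.sign()
        return adv.clamp(0.0, 1.0).detach()

    # Toy usage with an untrained stand-in classifier on a random "image".
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)
    y = torch.tensor([3])
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())  # perturbation is bounded by epsilon
    ```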

  • Open Access

    ARTICLE

    Restoration of Adversarial Examples Using Image Arithmetic Operations

    Kazim Ali, Adnan N. Qureshi

    Intelligent Automation & Soft Computing, Vol.32, No.1, pp. 271-284, 2022, DOI:10.32604/iasc.2022.021296 - 26 October 2021

    Abstract The current development of artificial intelligence is largely based on deep neural networks (DNNs). In the computer vision field especially, DNNs now appear in everything from autonomous vehicles to safety control systems. The Convolutional Neural Network (CNN), based on DNNs, is mostly used in different computer vision applications, especially image classification and object detection. A CNN model takes photos as input and, after training, assigns them a suitable class by tuning trainable parameters such as weights and biases. The CNN is inspired by the visual cortex of the human brain and sometimes performs even better than human visual…
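    A toy sketch of restoring an image through plain pixel arithmetic, assuming operations such as blending with a smoothed copy and a small brightness offset; the paper's actual choice of arithmetic operations may differ:

    ```python
    # Illustrative sketch: simple pixel-arithmetic transforms intended to wash
    # out small adversarial perturbations before classification. The specific
    # operations here are assumptions for illustration.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def arithmetic_restore(img, alpha=0.8, offset=4):
        """img: H x W x C uint8 array. Returns a 'restored' uint8 image."""
        x = img.astype(np.float32)
        blurred = gaussian_filter(x, sigma=(1, 1, 0))  # smooth spatially, not across channels
        # Blend with the smoothed copy, then apply a small brightness offset;
        # both are plain arithmetic on pixel values.
        x = alpha * x + (1 - alpha) * blurred
        x = x + offset
        return np.clip(x, 0, 255).astype(np.uint8)
    ```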

  • Open Access

    ARTICLE

    An Adversarial Attack System for Face Recognition

    Yuetian Wang, Chuanjing Zhang, Xuxin Liao, Xingang Wang, Zhaoquan Gu

    Journal on Artificial Intelligence, Vol.3, No.1, pp. 1-8, 2021, DOI:10.32604/jai.2021.014175 - 02 April 2021

    Abstract Deep neural networks (DNNs) are widely adopted in daily life, and their security problems have drawn attention from both scientific researchers and industrial engineers. Many related works show that DNNs are vulnerable to adversarial examples, which are generated by adding subtle perturbations to original images in both the digital domain and the physical domain. As one of the most common applications of DNNs, face recognition systems may cause serious consequences if they are attacked by adversarial examples. In this paper, we implement an adversarial attack system for face recognition in both the digital domain, which generates…
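    A hedged sketch of one digital-domain evasion attack on an embedding-based face verifier: gradient steps push the face's embedding away from its enrolled template so verification fails. The `embed` network and `evade_verification` helper are hypothetical stand-ins, not the system built in the paper:

    ```python
    # Illustrative sketch: iterative evasion of an embedding-based verifier by
    # minimizing cosine similarity to the enrolled template (stand-in model).
    import torch

    def evade_verification(embed, face, template, steps=10, step_size=0.01, eps=0.05):
        """face: [1, 3, H, W] in [0, 1]; template: enrolled embedding vector."""
        adv = face.clone().detach()
        for _ in range(steps):
            adv.requires_grad_(True)
            sim = torch.nn.functional.cosine_similarity(embed(adv), template)
            grad = torch.autograd.grad(sim.sum(), adv)[0]
            adv = adv.detach() - step_size * grad.sign()   # reduce similarity
            adv = face + (adv - face).clamp(-eps, eps)     # keep change imperceptible
            adv = adv.clamp(0.0, 1.0)
        return adv.detach()

    # Toy usage with an untrained stand-in embedding network.
    torch.manual_seed(0)
    embed = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 128))
    face = torch.rand(1, 3, 64, 64)
    template = embed(face).detach()            # enrolled embedding
    adv = evade_verification(embed, face, template)
    print(torch.nn.functional.cosine_similarity(embed(adv), template))
    ```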

  • Open Access

    ARTICLE

    Adversarial Attacks on Featureless Deep Learning Malicious URLs Detection

    Bader Rasheed, Adil Khan, S. M. Ahsan Kazmi, Rasheed Hussain, Md. Jalil Piran, Doug Young Suh

    CMC-Computers, Materials & Continua, Vol.68, No.1, pp. 921-939, 2021, DOI:10.32604/cmc.2021.015452 - 22 March 2021

    Abstract Detecting malicious Uniform Resource Locators (URLs) is crucially important to prevent attackers from committing cybercrimes. Recent research has investigated the role of machine learning (ML) models in detecting malicious URLs. With ML algorithms, the features of URLs are first extracted, and then different ML models are trained. The limitation of this approach is that it requires manual feature engineering and does not consider sequential patterns in the URL. Therefore, deep learning (DL) models are used to solve these issues, since they are able to perform featureless detection. Furthermore, DL models give better…
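    A minimal sketch of featureless URL classification in the sense the abstract describes: raw characters go straight into a small character-level CNN, so no hand-crafted features are needed. The vocabulary size, sequence length, and layer sizes are arbitrary illustrative choices:

    ```python
    # Illustrative sketch: character-level URL classifier with no manual features.
    import torch
    import torch.nn as nn

    VOCAB = 128      # byte-level alphabet
    MAX_LEN = 200    # truncate/pad URLs to a fixed length

    def encode(url: str) -> torch.Tensor:
        ids = [min(ord(c), VOCAB - 1) for c in url[:MAX_LEN]]
        ids += [0] * (MAX_LEN - len(ids))  # zero-pad to MAX_LEN
        return torch.tensor(ids)

    class UrlCnn(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, 32)
            self.conv = nn.Conv1d(32, 64, kernel_size=5, padding=2)
            self.head = nn.Linear(64, 2)   # benign vs. malicious

        def forward(self, ids):                        # ids: [N, MAX_LEN]
            x = self.embed(ids).transpose(1, 2)        # -> [N, 32, MAX_LEN]
            x = torch.relu(self.conv(x)).amax(dim=2)   # global max-pool -> [N, 64]
            return self.head(x)

    model = UrlCnn()
    logits = model(encode("http://example.com/login.php?id=1").unsqueeze(0))
    print(logits.softmax(dim=1))  # untrained, so scores are meaningless here
    ```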

  • Open Access

    ARTICLE

    A Two-Stage Highly Robust Text Steganalysis Model

    Enlu Li, Zhangjie Fu, Siyu Chen, Junfu Chen

    Journal of Cyber Security, Vol.2, No.4, pp. 183-190, 2020, DOI:10.32604/jcs.2020.015010 - 07 December 2020

    Abstract With the development of natural language processing, deep learning, and other technologies, text steganography is developing rapidly. However, adversarial attack methods have emerged that give text steganography the ability to actively spoof steganalysis. If terrorists use text steganography to spread terrorist messages, it will greatly disturb social stability. Steganalysis methods, especially those for resisting adversarial attacks, need to be further improved. In this paper, we propose a two-stage, highly robust model for text steganalysis. The proposed method analyzes and extracts anomalous features at both intra-sentential and inter-sentential levels. In the first phase, every…
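    A toy sketch of a two-stage pipeline in the spirit of the abstract: stage one extracts per-sentence (intra-sentential) features, stage two aggregates them across sentences (inter-sentential) for a document-level decision. The simple statistics used here are stand-ins for the paper's learned features:

    ```python
    # Illustrative two-stage sketch with toy hand-written features; a real
    # steganalysis model would learn both stages.
    import numpy as np

    def sentence_features(sentence: str) -> np.ndarray:
        words = sentence.split()
        if not words:
            return np.zeros(3)
        lengths = np.array([len(w) for w in words], dtype=float)
        return np.array([
            lengths.mean(),                 # average word length
            lengths.std(),                  # spread of word lengths
            len(set(words)) / len(words),   # type-token ratio
        ])

    def document_features(text: str) -> np.ndarray:
        # Stage 1: per-sentence feature vectors (intra-sentential level).
        sents = [s for s in text.split(".") if s.strip()]
        feats = np.array([sentence_features(s) for s in sents])
        # Stage 2: aggregate across sentences (inter-sentential level);
        # a classifier (e.g., logistic regression) would run on this vector.
        return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

    print(document_features("A short cover text. It may hide a secret payload."))
    ```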

  • Open Access

    ARTICLE

    Adversarial Attacks on License Plate Recognition Systems

    Zhaoquan Gu, Yu Su, Chenwei Liu, Yinyu Lyu, Yunxiang Jian, Hao Li, Zhen Cao, Le Wang

    CMC-Computers, Materials & Continua, Vol.65, No.2, pp. 1437-1452, 2020, DOI:10.32604/cmc.2020.011834 - 20 August 2020

    Abstract The license plate recognition system (LPRS) has been widely adopted in daily life due to its efficiency and high accuracy. Deep neural networks are commonly used in the LPRS to improve recognition accuracy. However, researchers have found that deep neural networks have their own security problems that may lead to unexpected results. Specifically, they can be easily attacked by adversarial examples, which are generated by adding small perturbations to the original images, resulting in incorrect license plate recognition. There are some classic methods to generate adversarial examples, but they cannot be adopted on…
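    One practical wrinkle when attacking an LPRS is that the perturbation usually has to stay on the plate itself. A minimal sketch of a mask-restricted attack step follows; the mask and step rule are illustrative assumptions, not the paper's method:

    ```python
    # Illustrative sketch: one attack step that only perturbs pixels inside a
    # binary plate-region mask (stand-in for a plate-localized attack).
    import torch

    def masked_step(image, grad, mask, step_size=0.01):
        """image, grad: [1, C, H, W] tensors; mask: [1, 1, H, W] of {0, 1}."""
        return (image + step_size * grad.sign() * mask).clamp(0.0, 1.0)
    ```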

  • Open Access

    ARTICLE

    Adversarial Attacks on Content-Based Filtering Journal Recommender Systems

    Zhaoquan Gu, Yinyin Cai, Sheng Wang, Mohan Li, Jing Qiu, Shen Su, Xiaojiang Du, Zhihong Tian

    CMC-Computers, Materials & Continua, Vol.64, No.3, pp. 1755-1770, 2020, DOI:10.32604/cmc.2020.010739 - 30 June 2020

    Abstract Recommender systems are very useful for people to explore what they really need. Academic papers are important achievements for researchers, who often have a great many journals to choose from when submitting their work. To help authors efficiently select the most suitable journals for publishing their work, journal recommender systems (JRS) can automatically provide a small number of candidate journals based on key information such as the title and the abstract. However, users or journal owners may attack the system for their own purposes. In this paper, we discuss the adversarial attacks…
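    A minimal sketch of the content-based recommendation the abstract refers to: journals are ranked by TF-IDF cosine similarity between a manuscript's title/abstract and each journal's scope text. The journal scopes below are invented; note how stuffing keywords into either side would shift the ranking, which is the attack surface discussed:

    ```python
    # Illustrative sketch: rank journals by TF-IDF cosine similarity to a
    # manuscript. Journal names and scope texts are invented examples.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    journal_scopes = {
        "Journal A": "computer vision deep learning image classification",
        "Journal B": "wireless networks vehicular communication protocols",
        "Journal C": "cryptography network security intrusion detection",
    }
    manuscript = "adversarial attacks on deep learning image classifiers"

    vec = TfidfVectorizer()
    scope_matrix = vec.fit_transform(journal_scopes.values())
    query = vec.transform([manuscript])
    scores = cosine_similarity(query, scope_matrix).ravel()

    for name, score in sorted(zip(journal_scopes, scores), key=lambda p: -p[1]):
        print(f"{name}: {score:.3f}")  # highest-scoring journals first
    ```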

  • Open Access

    ARTICLE

    A Fast Two-Stage Black-Box Deep Learning Network Attacking Method Based on Cross-Correlation

    Deyin Li, Mingzhi Cheng, Yu Yang, Min Lei, Linfeng Shen

    CMC-Computers, Materials & Continua, Vol.64, No.1, pp. 623-635, 2020, DOI:10.32604/cmc.2020.09800 - 20 May 2020

    Abstract Deep learning networks are widely used in various systems that require classification. However, deep learning networks are vulnerable to adversarial attacks, and the study of adversarial attacks plays an important role in defense. Black-box attacks require less knowledge about target models than white-box attacks do, which makes them easier to launch and more valuable. However, state-of-the-art black-box attacks still suffer from low success rates and large visual distances between the generated adversarial images and the original images. This paper proposes a fast black-box attack based on cross-correlation (FBACC). The attack is…
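    A hedged sketch of the score-based black-box setting: the attacker can only query the model for class scores and keeps random perturbations that lower the true class's probability. This illustrates the threat model, not the FBACC method itself, whose cross-correlation mechanism the snippet does not detail:

    ```python
    # Illustrative sketch: random-search black-box attack that only uses the
    # target's output scores (no gradients).
    import numpy as np

    def random_search_attack(query_scores, x, true_label, eps=0.05, steps=500, rng=None):
        """query_scores(x) -> class-probability vector; x is a float array in [0, 1]."""
        if rng is None:
            rng = np.random.default_rng(0)
        adv = x.copy()
        best = query_scores(adv)[true_label]   # probability to drive down
        for _ in range(steps):
            candidate = adv + 0.1 * rng.uniform(-eps, eps, size=x.shape)
            candidate = np.clip(candidate, x - eps, x + eps)  # stay near x
            candidate = np.clip(candidate, 0.0, 1.0)
            p = query_scores(candidate)[true_label]
            if p < best:                        # keep perturbations that help
                adv, best = candidate, p
        return adv

    # Toy target: a fixed linear "model" exposed only through its scores.
    w = np.random.default_rng(1).normal(size=(3, 16))
    def query_scores(x):
        z = w @ x
        e = np.exp(z - z.max())
        return e / e.sum()

    x0 = np.random.default_rng(2).uniform(size=16)
    adv = random_search_attack(query_scores, x0, true_label=0)
    print(query_scores(x0)[0], "->", query_scores(adv)[0])
    ```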

Displaying results 21-29 of 29 (page 3 of 3).