Search Results (17)
  • Open Access

    ARTICLE

    Adversarial Examples Generation Algorithm through DCGAN

    Biying Deng, Ziyong Ran, Jixin Chen, Desheng Zheng*, Qiao Yang, Lulu Tian

    Intelligent Automation & Soft Computing, Vol.30, No.3, pp. 889-898, 2021, DOI:10.32604/iasc.2021.019727 - 20 August 2021

    Abstract In recent years, with the popularization of deep learning technology, more and more attention has been paid to the security of deep neural networks. A wide variety of machine learning algorithms can attack neural networks and cause them to misclassify target samples. However, previous attack algorithms compute adversarial examples one at a time against a specific model and cannot extract attack features or generate corresponding samples in batches. In this paper, a Generative Adversarial Network (GAN) is used to learn the distribution of adversarial examples generated by FGSM…
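
    The FGSM step named in the abstract is a single signed-gradient perturbation, whose distribution the paper's GAN then learns. A minimal sketch of that step, assuming a PyTorch classifier model, inputs x scaled to [0, 1], and integer labels y (the function name and epsilon value are illustrative):

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.1):
    """One-step FGSM: x' = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```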

  • Open Access

    ARTICLE

    An Adversarial Network-based Multi-model Black-box Attack

    Bin Lin, Jixin Chen, Zhihong Zhang, Yanlin Lai, Xinlong Wu, Lulu Tian, Wangchi Cheng*

    Intelligent Automation & Soft Computing, Vol.30, No.2, pp. 641-649, 2021, DOI:10.32604/iasc.2021.016818 - 11 August 2021

    Abstract Research has shown that deep neural networks (DNNs) are vulnerable to adversarial examples. In this paper, we propose a generative model that explores how to produce adversarial examples capable of deceiving multiple deep learning models simultaneously. Unlike most popular adversarial attack algorithms, the one proposed in this paper is based on Generative Adversarial Networks (GAN). It can quickly produce adversarial examples and perform black-box attacks on multiple models. To enhance the transferability of the samples generated by our approach, we use multiple neural networks in the training process. Experimental results on MNIST showed that…
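
    One plausible form of the multi-model objective (an illustrative sketch under assumed names, not the authors' exact loss): train the generator so that a bounded perturbation raises the classification loss of several target models at once, which is what encourages transferability.

```python
import torch
import torch.nn.functional as F

def multi_model_attack_loss(generator, models, x, y, eps=0.3, c=1.0):
    """Generator loss that attacks several classifiers at once (illustrative)."""
    delta = eps * torch.tanh(generator(x))        # perturbation bounded by eps
    x_adv = (x + delta).clamp(0.0, 1.0)
    # Minimizing the negative cross-entropy raises every model's loss on the true label.
    attack = sum(-F.cross_entropy(m(x_adv), y) for m in models)
    return attack + c * delta.norm(p=2)           # keep the perturbation small
```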

  • Open Access

    ARTICLE

    A Generation Method of Letter-Level Adversarial Samples

    Huixuan Xu, Chunlai Du, Yanhui Guo*, Zhijian Cui, Haibo Bai

    Journal on Artificial Intelligence, Vol.3, No.2, pp. 45-53, 2021, DOI:10.32604/jai.2021.016305 - 08 May 2021

    Abstract In recent years, with the rapid development of natural language processing, its security issues have attracted more and more attention. Character perturbation is a common security problem: by adding, deleting, or replacing several characters without attracting attention, an attacker can alter the target program's classification of the input and reduce the effectiveness of the classifier. Although current research has provided various methods of character-perturbation attacks, the success rate of some methods is still not ideal. This paper mainly studies the generation of samples with optimal perturbation…
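
    A character-level edit of the kind described can be sketched in a few lines; the edit operations and look-alike substitutions below are illustrative assumptions, not the paper's optimal-perturbation method:

```python
import random

# Look-alike substitutions (an illustrative subset).
SWAPS = {"o": "0", "l": "1", "a": "@", "e": "3", "s": "$"}

def perturb_word(word, rng=random):
    """Apply one random character edit: insert, delete, or replace."""
    i = rng.randrange(len(word))
    op = rng.choice(["insert", "delete", "replace"])
    if op == "insert":
        return word[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + word[i:]
    if op == "delete" and len(word) > 1:
        return word[:i] + word[i + 1:]
    return word[:i] + SWAPS.get(word[i], word[i]) + word[i + 1:]

print(perturb_word("please"))  # e.g. 'pl3ase', 'pleqase', or 'plase'
```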

  • Open Access

    ARTICLE

    Deep Learning Approach for COVID-19 Detection in Computed Tomography Images

    Mohamad Mahmoud Al Rahhal, Yakoub Bazi*, Rami M. Jomaa, Mansour Zuair, Naif Al Ajlan

    CMC-Computers, Materials & Continua, Vol.67, No.2, pp. 2093-2110, 2021, DOI:10.32604/cmc.2021.014956 - 05 February 2021

    Abstract With the rapid spread of the coronavirus disease 2019 (COVID-19) worldwide, establishing an accurate and fast process to diagnose the disease is important. The routine real-time reverse transcription-polymerase chain reaction (rRT-PCR) test currently in use does not provide such accuracy or speed in the screening process. Deep learning techniques are among the good choices for an accurate and fast test to screen for COVID-19. In this study, a new convolutional neural network (CNN) framework for COVID-19 detection using computed tomography (CT) images is proposed. The EfficientNet architecture is applied as the backbone…
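
    A common way to set up such a backbone is to take a pretrained EfficientNet and replace its classification head; the sketch below uses torchvision's EfficientNet-B0 with a two-class head, both of which are assumptions since the abstract is truncated before those details:

```python
import torch.nn as nn
from torchvision import models

# Pretrained EfficientNet-B0 backbone, classifier head replaced for two classes
# (COVID-19 vs. non-COVID); the variant, weights, and head size are assumptions.
net = models.efficientnet_b0(weights="IMAGENET1K_V1")
net.classifier[1] = nn.Linear(net.classifier[1].in_features, 2)
```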

  • Open Access

    REVIEW

    A Survey on Adversarial Example

    Jiawei Zhang*, Jinwei Wang

    Journal of Information Hiding and Privacy Protection, Vol.2, No.1, pp. 47-57, 2020, DOI:10.32604/jihpp.2020.010462 - 15 October 2020

    Abstract In recent years, deep learning has become a hotspot and a core method in the field of machine learning. In the field of machine vision, deep learning has excellent performance in feature extraction and feature representation, making it widely used in directions such as self-driving cars and face recognition. Although deep learning can solve large-scale complex problems very well, the latest research shows that deep learning network models are very vulnerable to adversarial attack: adding a weak perturbation to the original input will lead to wrong output from the neural network, but for…

  • Open Access

    ARTICLE

    A Survey on Adversarial Examples in Deep Learning

    Kai Chen*, Haoqi Zhu, Leiming Yan, Jinwei Wang

    Journal on Big Data, Vol.2, No.2, pp. 71-84, 2020, DOI:10.32604/jbd.2020.012294 - 18 September 2020

    Abstract Adversarial examples are a hot topic in the field of security in deep learning. The features, generation methods, and attack and defense methods of adversarial examples are the focus of current research. This article explains the key technologies and theories of adversarial examples, from the concept of adversarial examples and how they occur to the methods of attacking with them, and lists possible reasons for their existence. It also analyzes several typical generation methods of adversarial examples in detail: Limited-memory BFGS (L-BFGS), Fast Gradient Sign Method (FGSM), Basic…
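
    For reference, the named generation methods can be written compactly (standard textbook forms, not quotations from the article; the truncated "Basic…" presumably refers to the Basic Iterative Method):

```latex
% FGSM: a single signed-gradient step of size \epsilon
x' = x + \epsilon \cdot \operatorname{sign}\bigl(\nabla_x J(\theta, x, y)\bigr)

% Basic Iterative Method: repeated small steps, clipped to an \epsilon-ball around x
x^{(t+1)} = \mathrm{Clip}_{x,\epsilon}\Bigl(x^{(t)} + \alpha \cdot \operatorname{sign}\bigl(\nabla_x J(\theta, x^{(t)}, y)\bigr)\Bigr)
```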

  • Open Access

    ARTICLE

    Adversarial Attacks on License Plate Recognition Systems

    Zhaoquan Gu, Yu Su, Chenwei Liu, Yinyu Lyu, Yunxiang Jian, Hao Li, Zhen Cao, Le Wang*

    CMC-Computers, Materials & Continua, Vol.65, No.2, pp. 1437-1452, 2020, DOI:10.32604/cmc.2020.011834 - 20 August 2020

    Abstract The license plate recognition system (LPRS) has been widely adopted in daily life due to its efficiency and high accuracy. Deep neural networks are commonly used in the LPRS to improve recognition accuracy. However, researchers have found that deep neural networks have their own security problems that may lead to unexpected results. Specifically, they can be easily attacked by adversarial examples, which are generated by adding small perturbations to the original images and result in incorrect license plate recognition. There are some classic methods to generate adversarial examples, but they cannot be adopted on…

Displaying results 11–17 of 17 (page 2).