Search Results (14)
  • Open Access

    ARTICLE

    Instance Reweighting Adversarial Training Based on Confused Label

    Zhicong Qiu1,2, Xianmin Wang1,*, Huawei Ma1, Songcao Hou1, Jing Li1,2,*, Zuoyong Li2

    Intelligent Automation & Soft Computing, Vol.37, No.2, pp. 1243-1256, 2023, DOI:10.32604/iasc.2023.038241

    Abstract Reweighting adversarial examples during training plays an essential role in improving the robustness of neural networks, which lies in the fact that examples closer to the decision boundaries are much more vulnerable to being attacked and should be given larger weights. The probability margin (PM) method is a promising approach to continuously and path-independently measuring such closeness between the example and decision boundary. However, the performance of PM is limited due to the fact that PM fails to effectively distinguish the examples having only one misclassified category and the ones with multiple misclassified categories, where the latter is closer to…
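    The probability margin described above can be sketched in a few lines. This is a minimal illustration of the general PM idea (true-class confidence minus the best wrong-class confidence), not the authors' implementation; the `probs` and `true_label` names are our own.

    ```python
    import numpy as np

    def probability_margin(probs, true_label):
        """Probability margin (PM): confidence assigned to the true class
        minus the highest confidence among all other classes. Values near
        or below zero indicate an example close to the decision boundary,
        which reweighting schemes would assign a larger weight."""
        p_true = probs[true_label]
        p_other = np.max(np.delete(probs, true_label))
        return p_true - p_other

    # A confidently correct prediction has a large positive margin:
    probability_margin(np.array([0.7, 0.2, 0.1]), true_label=0)  # 0.5
    ```

    A misclassified example yields a negative margin, so the sign of PM alone already separates correct from incorrect predictions; the paper's critique is that PM does not further distinguish *how many* categories outscore the true one.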

  • Open Access

    ARTICLE

    Adversarial Examples Protect Your Privacy on Speech Enhancement System

    Mingyu Dong, Diqun Yan*, Rangding Wang

    Computer Systems Science and Engineering, Vol.46, No.1, pp. 1-12, 2023, DOI:10.32604/csse.2023.034568

    Abstract Speech is easily leaked without the speaker noticing. When people use their phones, the personal voice assistant is constantly listening, waiting to be activated. Private content in speech may be maliciously extracted through automatic speech recognition (ASR) technology by some applications on phone devices. To guarantee that the recognized speech content is accurate, speech enhancement technology is used to denoise the input speech. Speech enhancement technology has developed rapidly along with deep neural networks (DNNs), but adversarial examples can cause DNNs to fail. This vulnerability of DNNs can, however, be exploited to protect the privacy of speech. In this work, we…

  • Open Access

    ARTICLE

    Defending Adversarial Examples by a Clipped Residual U-Net Model

    Kazim Ali1,*, Adnan N. Qureshi1, Muhammad Shahid Bhatti2, Abid Sohail2, Mohammad Hijji3

    Intelligent Automation & Soft Computing, Vol.35, No.2, pp. 2237-2256, 2023, DOI:10.32604/iasc.2023.028810

    Abstract Deep learning-based systems have succeeded in many computer vision tasks. However, recent studies indicate that these systems are at risk in the presence of adversarial attacks. These attacks can easily fool deep learning models, e.g., the various convolutional neural networks (CNNs) used in computer vision tasks from image classification to object detection. Adversarial examples are carefully designed by injecting a slight perturbation into clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet defense, Generative Adversarial Network Defense, Deep Regret Analytic Generative Adversarial Networks Defense, Deep Denoising…

  • Open Access

    ARTICLE

    An Overview of Adversarial Attacks and Defenses

    Kai Chen*, Jinwei Wang, Jiawei Zhang

    Journal of Information Hiding and Privacy Protection, Vol.4, No.1, pp. 15-24, 2022, DOI:10.32604/jihpp.2022.029006

    Abstract In recent years, machine learning has become more and more popular, and the continuous development of deep learning technology in particular has brought great revolutions to many fields. In tasks such as image classification, natural language processing, information hiding, and multimedia synthesis, the performance of deep learning has far exceeded that of traditional algorithms. However, researchers have found that although deep learning can train an accurate model on a large amount of data to complete various tasks, the model is vulnerable to examples that have been artificially modified. Such attacks are called adversarial attacks, and the modified examples are called adversarial examples.…

  • Open Access

    ARTICLE

    Deep Image Restoration Model: A Defense Method Against Adversarial Attacks

    Kazim Ali1,*, Adnan N. Qureshi1, Ahmad Alauddin Bin Arifin2, Muhammad Shahid Bhatti3, Abid Sohail3, Rohail Hassan4

    CMC-Computers, Materials & Continua, Vol.71, No.2, pp. 2209-2224, 2022, DOI:10.32604/cmc.2022.020111

    Abstract Deep learning and computer vision are fast-growing fields in the modern world of information technology. Deep learning algorithms and computer vision have achieved great success in applications such as image classification, speech recognition, self-driving vehicles, disease diagnostics, and many more. Despite this success, these learning algorithms face severe threats from adversarial attacks. Adversarial examples are inputs, such as images in the computer vision field, that are intentionally perturbed by slight changes. These changes are imperceptible to humans but are misclassified by a model with high probability, severely affecting its performance and predictions.…

  • Open Access

    ARTICLE

    Restoration of Adversarial Examples Using Image Arithmetic Operations

    Kazim Ali*, Adnan N. Qureshi

    Intelligent Automation & Soft Computing, Vol.32, No.1, pp. 271-284, 2022, DOI:10.32604/iasc.2022.021296

    Abstract The current development of artificial intelligence is largely based on deep neural networks (DNNs). In the computer vision field especially, DNNs now occur in everything from autonomous vehicles to safety control systems. The convolutional neural network (CNN) is a DNN architecture used in many computer vision applications, especially image classification and object detection. A CNN model takes photos as input and, after training, assigns each a suitable class by learning trainable parameters such as weights and biases. The CNN is inspired by the visual cortex of the human brain and sometimes performs even better than the human visual system. However, recent research shows…

  • Open Access

    ARTICLE

    A Parametric Study of Arabic Text-Based CAPTCHA Difficulty for Humans

    Suliman A. Alsuhibany*, Hessah Abdulaziz Alhodathi

    Intelligent Automation & Soft Computing, Vol.31, No.1, pp. 523-537, 2022, DOI:10.32604/iasc.2022.019913

    Abstract The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) technique has been an interesting topic for several years. An Arabic CAPTCHA has recently been proposed to serve Arab users. Since there have been few scientific studies supporting a systematic design or tuning for users, this paper aims to analyze the Arabic text-based CAPTCHA at the parameter level by conducting an experimental study. Based on the results of this study, we propose an Arabic text-based CAPTCHA scheme with Fast Gradient Sign Method (FGSM) adversarial images. To evaluate the security of the proposed scheme, we ran four filter…

  • Open Access

    ARTICLE

    Adversarial Examples Generation Algorithm through DCGAN

    Biying Deng1, Ziyong Ran1, Jixin Chen1, Desheng Zheng1,*, Qiao Yang2, Lulu Tian3

    Intelligent Automation & Soft Computing, Vol.30, No.3, pp. 889-898, 2021, DOI:10.32604/iasc.2021.019727

    Abstract In recent years, with the popularization of deep learning technology, more and more attention has been paid to the security of deep neural networks. A wide variety of machine learning algorithms can attack neural networks and cause them to misclassify target samples. However, previous attack algorithms compute against a specific model to generate unique adversarial examples and cannot extract attack features or generate corresponding samples in batches. In this paper, Generative Adversarial Networks (GANs) are used to learn the distribution of adversarial examples generated by FGSM and to establish a generation model,…
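    The FGSM attack whose outputs the GAN learns from can be sketched as follows. This is a generic, hedged illustration of FGSM's one-step update rule only; the loss gradient `grad` is assumed to come from some trained model, which is outside this sketch, and the epsilon value is illustrative.

    ```python
    import numpy as np

    def fgsm(x, grad, eps=0.03):
        """Fast Gradient Sign Method: take one epsilon-sized step in the
        sign direction of the loss gradient with respect to the input,
        then clip back to the valid pixel range [0, 1]."""
        x_adv = x + eps * np.sign(grad)
        return np.clip(x_adv, 0.0, 1.0)

    # Pixels move by exactly +/-eps, except where clipping intervenes:
    fgsm(np.array([0.5, 0.0, 1.0]), np.array([1.0, -2.0, 3.0]))
    # array([0.53, 0.  , 1.  ])
    ```

    Because every pixel moves by the same fixed magnitude, FGSM examples share a structured perturbation pattern, which is what makes their distribution learnable by a generative model in batch.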

  • Open Access

    ARTICLE

    An Adversarial Network-based Multi-model Black-box Attack

    Bin Lin1, Jixin Chen2, Zhihong Zhang3, Yanlin Lai2, Xinlong Wu2, Lulu Tian4, Wangchi Cheng5,*

    Intelligent Automation & Soft Computing, Vol.30, No.2, pp. 641-649, 2021, DOI:10.32604/iasc.2021.016818

    Abstract Research has shown that deep neural networks (DNNs) are vulnerable to adversarial examples. In this paper, we propose a generative model that explores how to produce adversarial examples capable of deceiving multiple deep learning models simultaneously. Unlike most popular adversarial attack algorithms, the one proposed in this paper is based on Generative Adversarial Networks (GANs). It can quickly produce adversarial examples and perform black-box attacks on multiple models. To enhance the transferability of the samples generated by our approach, we use multiple neural networks in the training process. Experimental results on MNIST showed that our method can efficiently generate…

  • Open Access

    ARTICLE

    A Generation Method of Letter-Level Adversarial Samples

    Huixuan Xu1, Chunlai Du1, Yanhui Guo2,*, Zhijian Cui1, Haibo Bai1

    Journal on Artificial Intelligence, Vol.3, No.2, pp. 45-53, 2021, DOI:10.32604/jai.2021.016305

    Abstract In recent years, with the rapid development of natural language processing, its related security issues have attracted more and more attention. Character perturbation is a common security problem: by adding, deleting, or replacing several characters without people noticing, an attacker can completely change the classification judgment the target program makes on an input, reducing the effectiveness of the classifier. Although current research provides various methods of character perturbation attacks, the success rate of some methods is still not ideal. This paper mainly studies the generation of optimal perturbation characters and proposes a character-level…
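    One of the letter-level edits such attacks draw from (here, swapping two adjacent interior letters, which tends to stay readable to humans) can be sketched as below. The helper and its names are hypothetical illustrations, not the paper's optimal-perturbation search.

    ```python
    import random

    def swap_adjacent(word, rng=None):
        """One candidate letter-level edit for a character perturbation
        attack: swap two adjacent interior letters, leaving the first and
        last letters in place so the word still reads plausibly."""
        if len(word) < 4:
            return word  # too short to have two swappable interior letters
        rng = rng or random.Random(0)
        i = rng.randrange(1, len(word) - 2)  # interior position only
        chars = list(word)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return "".join(chars)
    ```

    An attack in the style the abstract describes would generate many such candidate edits (swaps, insertions, deletions, substitutions) and keep whichever one most changes the classifier's output.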

Displaying results 1-10 of 14.