Search Results (23)
  • Open Access

    ARTICLE

    An Overview of Adversarial Attacks and Defenses

    Kai Chen*, Jinwei Wang, Jiawei Zhang

    Journal of Information Hiding and Privacy Protection, Vol.4, No.1, pp. 15-24, 2022, DOI:10.32604/jihpp.2022.029006

    Abstract In recent years, machine learning, and the continuing development of deep learning in particular, has brought great revolutions to many fields. In tasks such as image classification, natural language processing, information hiding, and multimedia synthesis, the performance of deep learning far exceeds that of traditional algorithms. However, researchers have found that although deep learning can train an accurate model from a large amount of data, the model remains vulnerable to inputs that are artificially modified. This technique is called an adversarial attack, and the modified inputs are called adversarial examples.…
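
    The perturbation idea this survey covers can be made concrete with a one-step gradient attack. Below is a minimal FGSM-style sketch in PyTorch; it illustrates the general technique, not code from the paper, and `model`, `x`, and `label` are assumed placeholders.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return x perturbed one step in the direction of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # one-step sign-gradient perturbation
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

# Usage with any image classifier (hypothetical names):
#   x_adv = fgsm_attack(classifier, images, labels, epsilon=8 / 255)
```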

  • Open Access

    ARTICLE

    A Two Stream Fusion Assisted Deep Learning Framework for Stomach Diseases Classification

    Muhammad Shahid Amin1, Jamal Hussain Shah1, Mussarat Yasmin1, Ghulam Jillani Ansari2, Muhammad Attique Khan3, Usman Tariq4, Ye Jin Kim5, Byoungchol Chang6,*

    CMC-Computers, Materials & Continua, Vol.73, No.2, pp. 4423-4439, 2022, DOI:10.32604/cmc.2022.030432

    Abstract With the rapid development of Artificial Intelligence (AI) and Deep Learning (DL), it has become difficult to maintain the security and robustness of these techniques and algorithms against adversarial sampling, to which such models are sensitive. Fake samples cause AI and DL models to produce divergent results. Adversarial attacks that have been successfully mounted in real-world scenarios highlight their applicability even further. In this regard, minor modifications of input images, known as adversarial attacks, can dramatically alter the performance of competing models. Recently, such attacks and the corresponding defensive strategies have been gaining considerable attention from the machine…

  • Open Access

    ARTICLE

    Adversarial Training Against Adversarial Attacks for Machine Learning-Based Intrusion Detection Systems

    Muhammad Shahzad Haroon*, Husnain Mansoor Ali

    CMC-Computers, Materials & Continua, Vol.73, No.2, pp. 3513-3527, 2022, DOI:10.32604/cmc.2022.029858

    Abstract Intrusion detection systems play an important role in defending networks from security breaches. End-to-end machine learning-based intrusion detection systems are being used to achieve high detection accuracy. However, adversarial attacks, which cause misclassification by introducing imperceptible perturbations into input samples, greatly degrade the performance of machine learning-based intrusion detection systems. Though such problems have been widely discussed in the image processing domain, very few studies have investigated network intrusion detection systems and proposed a corresponding defence. In this paper, we attempt to fill this gap by mounting adversarial attacks on standard intrusion detection datasets and then using the adversarial samples…
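
    The defence this paper evaluates, adversarial training, amounts to augmenting each training batch with adversarial copies before the usual gradient step. A hedged sketch, assuming an `fgsm_attack` helper like the one above and a generic PyTorch model and data loader:

```python
import torch

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.05):
    """One epoch of training on clean batches plus their adversarial copies."""
    model.train()
    for features, labels in loader:
        # Craft adversarial versions of the batch (fgsm_attack assumed above).
        x_adv = fgsm_attack(model, features, labels, epsilon)
        x = torch.cat([features, x_adv])
        y = torch.cat([labels, labels])
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
```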

  • Open Access

    ARTICLE

    An Optimised Defensive Technique to Recognize Adversarial Iris Images Using Curvelet Transform

    K. Meenakshi1,*, G. Maragatham2

    Intelligent Automation & Soft Computing, Vol.35, No.1, pp. 627-643, 2023, DOI:10.32604/iasc.2023.026961

    Abstract Deep learning is one of the most popular computer science techniques, with applications in natural language processing, image processing, pattern identification, and various other fields. Despite the success of deep learning algorithms in scenarios such as spam detection, malware detection, object detection and tracking, face recognition, and automated driving, these algorithms and their associated training data remain vulnerable to numerous security threats, which ultimately result in significant performance degradation. Moreover, supervised learning models are affected by manipulated data known as adversarial examples: images carrying a particular level of noise that is invisible…
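
    Curvelet libraries are not part of the standard Python stack, so the sketch below substitutes a wavelet transform (PyWavelets) to illustrate the shared idea: move to a transform domain, suppress the small coefficients that carry the adversarial noise, and invert. It is a stand-in, not the paper's curvelet method.

```python
import numpy as np
import pywt  # PyWavelets

def transform_domain_denoise(img: np.ndarray, threshold: float = 0.04) -> np.ndarray:
    """Soft-threshold small wavelet coefficients of a grayscale image."""
    coeffs = pywt.wavedec2(img, "db2", level=3)
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, threshold, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, "db2")
```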

  • Open Access

    ARTICLE

    VANET Jamming and Adversarial Attack Defense for Autonomous Vehicle Safety

    Haeri Kim1, Jong-Moon Chung1,2,*

    CMC-Computers, Materials & Continua, Vol.71, No.2, pp. 3589-3605, 2022, DOI:10.32604/cmc.2022.023073

    Abstract The development of Vehicular Ad-hoc Network (VANET) technology is helping Intelligent Transportation System (ITS) services become a reality. Vehicles can use VANETs to communicate safety messages on the road (while driving), report their location, and share road-condition information in real time. However, intentional and unintentional (e.g., packet/frame collision) wireless signal jamming can occur, which degrades the quality of communication over the channel, prevents the reception of safety messages, and thereby poses a safety hazard to the vehicle's passengers. In this paper, VANET jamming detection based on Support Vector Machine (SVM) machine learning technology is used to classify…
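
    The classification step can be illustrated in a few lines of scikit-learn. The feature columns suggested below (e.g., received signal strength, packet delivery ratio) and the synthetic data are assumptions for the sketch, not the paper's dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in data: rows could be [mean RSSI, packet delivery
# ratio, busy-channel ratio, retransmissions] per observation window.
X = rng.normal(size=(1000, 4))
y = rng.integers(0, 2, size=1000)  # 0 = normal channel, 1 = jammed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```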

  • Open Access

    ARTICLE

    Deep Image Restoration Model: A Defense Method Against Adversarial Attacks

    Kazim Ali1,*, Adnan N. Qureshi1, Ahmad Alauddin Bin Arifin2, Muhammad Shahid Bhatti3, Abid Sohail3, Rohail Hassan4

    CMC-Computers, Materials & Continua, Vol.71, No.2, pp. 2209-2224, 2022, DOI:10.32604/cmc.2022.020111

    Abstract Deep learning and computer vision are fast-growing fields in today's world of information technology. Deep learning algorithms and computer vision have achieved great success in applications such as image classification, speech recognition, self-driving vehicles, and disease diagnostics. Despite this success, these learning algorithms face severe threats from adversarial attacks. Adversarial examples are inputs, such as images in the computer vision field, that are intentionally but slightly perturbed. The changes are humanly imperceptible, yet the inputs are misclassified by a model with high probability, severely affecting its performance or predictions.…
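
    The general pattern, restoring an input before classifying it, can be sketched with a small convolutional network. The architecture below is illustrative only; the paper's restoration model is not reproduced here.

```python
import torch.nn as nn

class TinyRestorer(nn.Module):
    """A small convolutional restorer applied before the classifier."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),  # back to [0, 1]
        )

    def forward(self, x):
        return self.net(x)

# Deployment idea: classifier(restorer(x)) instead of classifier(x),
# with the restorer trained to map adversarial images back to clean ones.
```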

  • Open Access

    ARTICLE

    Restoration of Adversarial Examples Using Image Arithmetic Operations

    Kazim Ali*, Adnan N. Qureshi

    Intelligent Automation & Soft Computing, Vol.32, No.1, pp. 271-284, 2022, DOI:10.32604/iasc.2022.021296

    Abstract The current development of artificial intelligence is largely based on deep Neural Networks (DNNs). Especially in the computer vision field, DNNs now occur in everything from autonomous vehicles to safety-control systems. The Convolutional Neural Network (CNN), which is based on DNNs, is mostly used in computer vision applications, especially image classification and object detection. A CNN model takes images as input and, after training, assigns each a suitable class by learning trainable parameters such as weights and biases. The CNN is inspired by the visual cortex of the human brain and sometimes performs even better than the human visual system. However, recent research shows…
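
    The abstract does not spell out which arithmetic operations are used, so the following is a speculative sketch of the general idea only: apply simple pixel-wise shifts to the input and keep the candidate the classifier scores most confidently. All names are placeholders.

```python
import torch

def restore_by_arithmetic(model, x, shifts=(-0.1, -0.05, 0.0, 0.05, 0.1)):
    """Try pixel-wise shifts of image tensor x (values in [0, 1]) and
    return the candidate the model classifies most confidently."""
    best, best_conf = x, 0.0
    for s in shifts:
        candidate = torch.clamp(x + s, 0.0, 1.0)  # pixel-wise addition
        conf = model(candidate).softmax(dim=1).max().item()
        if conf > best_conf:
            best, best_conf = candidate, conf
    return best
```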

  • Open Access

    ARTICLE

    An Adversarial Attack System for Face Recognition

    Yuetian Wang, Chuanjing Zhang, Xuxin Liao, Xingang Wang, Zhaoquan Gu*

    Journal on Artificial Intelligence, Vol.3, No.1, pp. 1-8, 2021, DOI:10.32604/jai.2021.014175

    Abstract Deep neural networks (DNNs) are widely adopted in daily life, and their security problems have drawn attention from both scientific researchers and industrial engineers. Many related works show that DNNs are vulnerable to adversarial examples, which are generated by adding subtle perturbations to original images, in both the digital and physical domains. As one of the most common applications of DNNs, face recognition systems are likely to cause serious consequences if attacked with adversarial examples. In this paper, we implement an adversarial attack system for face recognition in both the digital domain, generating adversarial face images to fool…
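
    Attack systems of this kind often rely on an iterative gradient method. The PGD-style sketch below shows that common technique; the paper's exact attack is not specified here, and `model`, `x`, and `label` are placeholders.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, label, epsilon=0.03, alpha=0.007, steps=10):
    """Iteratively perturb x, projecting back into an epsilon-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), label)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the epsilon-ball around the original image.
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```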

  • Open Access

    ARTICLE

    Adversarial Attacks on Featureless Deep Learning Malicious URLs Detection

    Bader Rasheed1, Adil Khan1, S. M. Ahsan Kazmi2, Rasheed Hussain2, Md. Jalil Piran3,*, Doug Young Suh4

    CMC-Computers, Materials & Continua, Vol.68, No.1, pp. 921-939, 2021, DOI:10.32604/cmc.2021.015452

    Abstract Detecting malicious Uniform Resource Locators (URLs) is crucial to preventing attackers from committing cybercrimes. Recent research has investigated the role of machine learning (ML) models in detecting malicious URLs. In the ML approach, features are first extracted from URLs and then different ML models are trained. The limitation of this approach is that it requires manual feature engineering and does not consider the sequential patterns in the URL. Therefore, deep learning (DL) models are used to solve these issues, since they are able to perform featureless detection. Furthermore, DL models give better accuracy and generalization to newly…
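
    "Featureless" detection means the raw URL characters go straight into the network, with no manual feature engineering. A minimal sketch, with an assumed byte-level vocabulary and illustrative dimensions:

```python
import torch
import torch.nn as nn

MAX_LEN, VOCAB = 200, 128  # truncate/pad URLs; byte-level vocabulary

def encode(url: str) -> torch.Tensor:
    """Map a URL to a fixed-length tensor of character ids (0 = padding)."""
    ids = [min(ord(c), VOCAB - 1) for c in url[:MAX_LEN]]
    ids += [0] * (MAX_LEN - len(ids))
    return torch.tensor(ids)

model = nn.Sequential(
    nn.Embedding(VOCAB, 32),
    nn.Flatten(),
    nn.Linear(MAX_LEN * 32, 64), nn.ReLU(),
    nn.Linear(64, 2),  # benign vs. malicious logits
)

logits = model(encode("http://example.com/login").unsqueeze(0))
```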

  • Open Access

    ARTICLE

    A Two-Stage Highly Robust Text Steganalysis Model

    Enlu Li1, Zhangjie Fu1,2,3,*, Siyu Chen1, Junfu Chen1

    Journal of Cyber Security, Vol.2, No.4, pp. 183-190, 2020, DOI:10.32604/jcs.2020.015010

    Abstract With the development of natural language processing, deep learning, and other technologies, text steganography is advancing rapidly. However, adversarial attack methods have emerged that give text steganography the ability to actively deceive steganalysis. If terrorists use text steganography to spread terrorist messages, it will greatly disturb social stability. Steganalysis methods, especially those for resisting adversarial attacks, need to be further improved. In this paper, we propose a two-stage, highly robust model for text steganalysis. The proposed method analyzes and extracts anomalous features at both the intra-sentential and inter-sentential levels. In the first phase, every sentence is first transformed into…
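
    A two-stage pipeline of this shape can be sketched with toy features: score each sentence on its own, then aggregate the scores into document-level features for a final classifier. The features and data below are crude stand-ins for the paper's learned representations, not its method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sentence_scores(doc: str) -> np.ndarray:
    """Stage 1 (intra-sentential): a toy per-sentence anomaly score,
    here mean word length as a crude fluency proxy."""
    sents = [s for s in doc.split(".") if s.strip()]
    return np.array([np.mean([len(w) for w in s.split()]) for s in sents])

def doc_features(doc: str) -> np.ndarray:
    """Stage 2 (inter-sentential): aggregate the sentence scores."""
    s = sentence_scores(doc)
    return np.array([s.mean(), s.std(), s.max()])

# Final classifier over document-level features (toy data and labels):
docs = ["A short cover text. It reads naturally.",
        "Stego text exhibiting strange wording. Entropy rises oddly."]
labels = [0, 1]  # 0 = cover, 1 = stego
clf = LogisticRegression().fit(np.stack([doc_features(d) for d in docs]), labels)
```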

Displaying results 11-20 of 23 (page 2).