Search Results (95)
  • Open Access

    ARTICLE

    Feature-Based Augmentation in Sarcasm Detection Using Reverse Generative Adversarial Network

    Derwin Suhartono1,*, Alif Tri Handoyo1, Franz Adeta Junior2

    CMC-Computers, Materials & Continua, Vol.77, No.3, pp. 3637-3657, 2023, DOI:10.32604/cmc.2023.045301 - 26 December 2023

    Abstract Sarcasm detection in text data is an increasingly vital area of research due to the prevalence of sarcastic content in online communication. This study addresses challenges associated with small datasets and class imbalances in sarcasm detection by employing comprehensive data pre-processing and Generative Adversarial Network (GAN)-based augmentation on diverse datasets, including iSarcasm, SemEval-18, and Ghosh. This research offers a novel pipeline for augmenting sarcasm data with a Reverse Generative Adversarial Network (RGAN). The proposed RGAN method works by inverting labels between original and synthetic data during the training process. This inversion of labels provides feedback… More >

  • Open Access

    ARTICLE

    PP-GAN: Style Transfer from Korean Portraits to ID Photos Using Landmark Extractor with GAN

    Jongwook Si1, Sungyoung Kim2,*

    CMC-Computers, Materials & Continua, Vol.77, No.3, pp. 3119-3138, 2023, DOI:10.32604/cmc.2023.043797 - 26 December 2023

    Abstract The objective of style transfer is to maintain the content of an image while transferring the style of another image. However, conventional methods face challenges in preserving facial features, especially in Korean portraits where elements like the “Gat” (a traditional Korean hat) are prevalent. This paper proposes a deep learning network designed to perform style transfer that includes the “Gat” while preserving the identity of the face. Unlike traditional style transfer techniques, the proposed method aims to preserve the texture, attire, and the “Gat” in the style image by employing image sharpening and face landmark,… More >

  • Open Access

    ARTICLE

    Automated Video Generation of Moving Digits from Text Using Deep Deconvolutional Generative Adversarial Network

    Anwar Ullah1, Xinguo Yu1,*, Muhammad Numan2

    CMC-Computers, Materials & Continua, Vol.77, No.2, pp. 2359-2383, 2023, DOI:10.32604/cmc.2023.041219 - 29 November 2023

    Abstract Generating realistic synthetic video from text is a highly challenging task due to the multitude of issues involved, including digit deformation, noise interference between frames, blurred output, and the need for temporal coherence across frames. In this paper, we propose a novel approach for generating coherent videos of moving digits from textual input using a Deep Deconvolutional Generative Adversarial Network (DD-GAN). The DD-GAN comprises a Deep Deconvolutional Neural Network (DDNN) as a Generator (G) and a modified Deep Convolutional Neural Network (DCNN) as a Discriminator (D) to ensure temporal coherence between adjacent frames. The… More >

  • Open Access

    ARTICLE

    Image to Image Translation Based on Differential Image Pix2Pix Model

    Xi Zhao1, Haizheng Yu1,*, Hong Bian2

    CMC-Computers, Materials & Continua, Vol.77, No.1, pp. 181-198, 2023, DOI:10.32604/cmc.2023.041479 - 31 October 2023

    Abstract In recent years, Pix2Pix, a model within the domain of GANs, has found widespread application in the field of image-to-image translation. However, traditional Pix2Pix models suffer from significant drawbacks in image generation, such as the loss of important information features during the encoding and decoding processes, as well as a lack of constraints during the training process. To address these issues and improve the quality of Pix2Pix-generated images, this paper introduces two key enhancements. Firstly, to reduce information loss during encoding and decoding, we utilize the U-Net++ network as the generator for the Pix2Pix model,… More >

  • Open Access

    ARTICLE

    A Credit Card Fraud Detection Model Based on Multi-Feature Fusion and Generative Adversarial Network

    Yalong Xie1, Aiping Li1,*, Biyin Hu2, Liqun Gao1, Hongkui Tu1

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 2707-2726, 2023, DOI:10.32604/cmc.2023.037039 - 08 October 2023

    Abstract Credit Card Fraud Detection (CCFD) is an essential technology for banking institutions to control fraud risks and safeguard their reputation. Class imbalance and insufficient representation of feature data relating to credit card transactions are two prevalent issues in current CCFD research that significantly impact classification models’ performance. To address these issues, this research proposes a novel CCFD model based on Multi-Feature Fusion and Generative Adversarial Networks (MFGAN). The MFGAN model consists of two modules: a multi-feature fusion module for integrating static and dynamic behavior data of cardholders into a unified high-dimensional feature… More >

  • Open Access

    ARTICLE

    A Sketch-Based Generation Model for Diverse Ceramic Tile Images Using Generative Adversarial Network

    Jianfeng Lu1,*, Xinyi Liu1, Mengtao Shi1, Chen Cui1,2, Mahmoud Emam1,3

    Intelligent Automation & Soft Computing, Vol.37, No.3, pp. 2865-2882, 2023, DOI:10.32604/iasc.2023.039742 - 11 September 2023

    Abstract Ceramic tiles are one of the most indispensable materials for interior decoration. However, natural ceramic textures cannot match design requirements in terms of diversity and interactivity. In this paper, we propose a sketch-based method for generating diverse ceramic tile images from hand-drawn sketches using a Generative Adversarial Network (GAN). The generated tile images can be tailored to meet the user's specific needs for tile textures. The proposed method consists of four steps. Firstly, a dataset of ceramic tile images with diverse distributions is created and… More >

  • Open Access

    ARTICLE

    Integrated Generative Adversarial Network and XGBoost for Anomaly Processing of Massive Data Flow in Dispatch Automation Systems

    Wenlu Ji1, Yingqi Liao1,*, Liudong Zhang2

    Intelligent Automation & Soft Computing, Vol.37, No.3, pp. 2825-2848, 2023, DOI:10.32604/iasc.2023.039618 - 11 September 2023

    Abstract Existing power anomaly detection is mainly based on pattern-matching algorithms. However, such methods require substantial manual work, are time-consuming, and cannot detect unknown anomalies. Moreover, machine learning-based anomaly detection requires a large amount of labeled anomaly data. Therefore, this paper proposes the application of a generative adversarial network (GAN) to massive data stream anomaly identification, diagnosis, and prediction in power dispatching automation systems. Firstly, to address the problem of the small amount of anomaly data, a GAN is used to obtain reliable labeled datasets for fault diagnosis model training… More >

  • Open Access

    ARTICLE

    A Novel S-Box Generation Methodology Based on the Optimized GAN Model

    Runlian Zhang1,*, Rui Shu1, Yongzhuang Wei1, Hailong Zhang2, Xiaonian Wu1

    CMC-Computers, Materials & Continua, Vol.76, No.2, pp. 1911-1927, 2023, DOI:10.32604/cmc.2023.041187 - 30 August 2023

    Abstract S-boxes are a core component of block ciphers, and efficiently generating S-boxes with strong cryptographic properties is an important task in block cipher design. In this work, an optimized model based on the generative adversarial network (GAN) is proposed to generate 8-bit S-boxes. The central idea of this optimized model is to use loss function constraints for the GAN. More specifically, the Advanced Encryption Standard (AES) S-box is used to construct the sample dataset via the affine equivalence property. Then, three models are respectively built and cross-trained to generate… More >

  • Open Access

    ARTICLE

    Single Image Desnow Based on Vision Transformer and Conditional Generative Adversarial Network for Internet of Vehicles

    Bingcai Wei, Di Wang, Zhuang Wang, Liye Zhang*

    CMES-Computer Modeling in Engineering & Sciences, Vol.137, No.2, pp. 1975-1988, 2023, DOI:10.32604/cmes.2023.027727 - 26 June 2023

    Abstract With the increasing popularity of artificial intelligence applications, machine learning is also playing an increasingly important role in the Internet of Things (IoT) and the Internet of Vehicles (IoV). As an essential part of the IoV, smart transportation relies heavily on information obtained from images. However, inclement weather, such as snow, negatively impacts this process and can hinder the regular operation of imaging equipment and the acquisition of conventional image information. Snow can also cause intelligent transportation systems to misjudge road conditions, and the entire system of… More >

  • Open Access

    ARTICLE

    ECGAN: Translate Real World to Cartoon Style Using Enhanced Cartoon Generative Adversarial Network

    Yixin Tang*

    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 1195-1212, 2023, DOI:10.32604/cmc.2023.039182 - 08 June 2023

    Abstract Visual illustration transformation from real-world to cartoon images is a well-known and challenging task in computer vision. Image-to-image translation from real-world to cartoon domains poses issues such as a lack of paired training samples, poor translation quality, weak feature extraction from the source-domain images, and low-quality output from traditional generator algorithms. To solve these issues, a paired independent model, a high-quality dataset, a Bayesian-based feature extractor, and an improved generator are required. In this study, we propose a high-quality dataset to reduce the effect of paired training… More >

Displaying results 21-30 of 95 (page 3).