Search Results (7)
  • Open Access

    ARTICLE

    PP-GAN: Style Transfer from Korean Portraits to ID Photos Using Landmark Extractor with GAN

    Jongwook Si1, Sungyoung Kim2,*

    CMC-Computers, Materials & Continua, Vol.77, No.3, pp. 3119-3138, 2023, DOI:10.32604/cmc.2023.043797

    Abstract The objective of style transfer is to maintain the content of an image while transferring the style of another image. However, conventional methods face challenges in preserving facial features, especially in Korean portraits, where elements like the “Gat” (a traditional Korean hat) are prevalent. This paper proposes a deep learning network designed to perform style transfer that includes the “Gat” while preserving the identity of the face. Unlike traditional style transfer techniques, the proposed method aims to preserve the texture, attire, and the “Gat” in the style image by employing image sharpening and face landmarks with a GAN. The color,…
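
    A minimal sketch of the preprocessing this abstract describes: sharpening the style portrait and extracting face landmarks before handing both to a GAN generator. The landmark detector interface and the three-part generator input are illustrative assumptions, not the authors' published PP-GAN.

        import cv2
        import numpy as np

        def unsharp_mask(image: np.ndarray, strength: float = 1.5) -> np.ndarray:
            """Sharpen by subtracting a Gaussian-blurred copy (unsharp masking)."""
            blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=3)
            return cv2.addWeighted(image, 1 + strength, blurred, -strength, 0)

        def prepare_gan_inputs(content_bgr, style_bgr, landmark_model):
            # Sharpen the style image so texture, attire, and the "Gat" survive.
            style_sharp = unsharp_mask(style_bgr)
            # Landmarks anchor facial identity; `landmark_model` stands in for
            # any 68-point face landmark detector returning a (68, 2) array.
            points = landmark_model.detect(content_bgr)
            heatmap = np.zeros(content_bgr.shape[:2], np.float32)
            for x, y in points.astype(int):
                cv2.circle(heatmap, (int(x), int(y)), 3, 1.0, -1)
            # Generator conditioning: content face + landmark map + style image.
            return content_bgr, heatmap[..., None], style_sharp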

  • Open Access

    ARTICLE

    ECGAN: Translate Real World to Cartoon Style Using Enhanced Cartoon Generative Adversarial Network

    Yixin Tang*

    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 1195-1212, 2023, DOI:10.32604/cmc.2023.039182

    Abstract Transforming real-world images into cartoon-style illustrations is a well-known and challenging task in computer vision. Image-to-image translation from the real-world to the cartoon domain poses issues such as a lack of paired training samples, poor translation quality, weak feature extraction from source-domain images, and low-quality output from traditional generator algorithms. Solving these issues calls for a pair-independent model, a high-quality dataset, a Bayesian-based feature extractor, and an improved generator. In this study, we propose a high-quality dataset to reduce the effect of paired training samples on the model’s performance.…
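
    The translation problem in this abstract is typically trained adversarially. Below is a bare-bones single training step for unpaired photo-to-cartoon translation in PyTorch; the networks G and D and the L1 content term are generic stand-ins, not ECGAN's actual components.

        import torch
        import torch.nn.functional as F

        def gan_step(G, D, opt_g, opt_d, real_photo, real_cartoon):
            """One generator/discriminator update for photo -> cartoon."""
            fake_cartoon = G(real_photo)

            # Discriminator: score real cartoons as 1, generated ones as 0.
            d_real = D(real_cartoon)
            d_fake = D(fake_cartoon.detach())
            d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
                      + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # Generator: fool D while keeping content close to the input photo.
            d_fake = D(fake_cartoon)
            g_loss = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
                      + 10.0 * F.l1_loss(fake_cartoon, real_photo))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()
            return d_loss.item(), g_loss.item()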

  • Open Access

    ARTICLE

    APST-Flow: A Reversible Network-Based Artistic Painting Style Transfer Method

    Meng Wang*, Yixuan Shao, Haipeng Liu

    CMC-Computers, Materials & Continua, Vol.75, No.3, pp. 5229-5254, 2023, DOI:10.32604/cmc.2023.036631

    Abstract In recent years, deep generative models have been successfully applied to artistic painting style transfer (APST). The remaining difficulties lie in the loss of spatial detail during reconstruction and in inefficient model convergence, both caused by the irreversible encoder-decoder methodology of existing models. To address this, this paper proposes a Flow-based architecture in which the encoder and decoder share a reversible network configuration. The proposed APST-Flow can efficiently reduce model uncertainty via a compact analysis-synthesis methodology, thereby improving generalization performance and convergence stability. For the generator, a Flow-based network using wavelet additive coupling (WAC) layers is implemented to…
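
    The property doing the work in this abstract is exact invertibility. An additive coupling layer shows why no reconstruction detail is lost: its inverse is a plain subtraction. The convolutional sub-network below is a generic stand-in for the paper's wavelet-based WAC layer.

        import torch
        import torch.nn as nn

        class AdditiveCoupling(nn.Module):
            """y1 = x1; y2 = x2 + F(x1) -- exactly invertible for any F."""
            def __init__(self, channels: int):
                super().__init__()
                half = channels // 2
                self.F = nn.Sequential(
                    nn.Conv2d(half, half, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(half, half, 3, padding=1))

            def forward(self, x):
                x1, x2 = x.chunk(2, dim=1)
                return torch.cat([x1, x2 + self.F(x1)], dim=1)

            def inverse(self, y):
                y1, y2 = y.chunk(2, dim=1)
                return torch.cat([y1, y2 - self.F(y1)], dim=1)

        # Round trip is exact up to floating-point error: no en-decoder loss.
        layer = AdditiveCoupling(8)
        x = torch.randn(1, 8, 16, 16)
        assert torch.allclose(layer.inverse(layer(x)), x, atol=1e-5)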

  • Open Access

    ARTICLE

    Emotional Vietnamese Speech Synthesis Using Style-Transfer Learning

    Thanh X. Le, An T. Le, Quang H. Nguyen*

    Computer Systems Science and Engineering, Vol.44, No.2, pp. 1263-1278, 2023, DOI:10.32604/csse.2023.026234

    Abstract In recent years, speech synthesis systems have allowed for the production of very high-quality voices, so research in this domain is now turning to the problem of integrating emotions into speech. However, the method of constructing a separate speech synthesizer for each emotion has some limitations. First, it often requires an emotional-speech data set with many sentences, and such data sets are very time-intensive and labor-intensive to complete. Second, training each of these models requires computers with large computational capabilities and a lot of effort and time for model tuning. In addition, each model for each emotion fails to take advantage…
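
    The per-emotion limitation argued here is what transfer learning avoids: keep one pretrained neutral synthesizer frozen and train only a small emotion-conditioning component on the limited emotional data. A generic sketch of that setup; the module names are placeholders, not the authors' architecture.

        import torch
        import torch.nn as nn

        def make_emotion_finetuner(base_tts: nn.Module, num_emotions: int,
                                   dim: int = 256):
            """Freeze the pretrained synthesizer; train only an emotion head."""
            for p in base_tts.parameters():
                p.requires_grad = False               # reuse neutral-speech weights
            emotion_emb = nn.Embedding(num_emotions, dim)   # small trainable part
            optimizer = torch.optim.Adam(emotion_emb.parameters(), lr=1e-4)
            return emotion_emb, optimizer

        # Training then conditions the frozen base on the learned embedding,
        # e.g. base_tts(text_ids, style=emotion_emb(emotion_id)) -- an assumed
        # interface -- so each new emotion needs only a small data set rather
        # than a full per-emotion model.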

  • Open Access

    ARTICLE

    Enhancing the Robustness of Visual Object Tracking via Style Transfer

    Abdollah Amirkhani1,*, Amir Hossein Barshooi1, Amir Ebrahimi2

    CMC-Computers, Materials & Continua, Vol.70, No.1, pp. 981-997, 2022, DOI:10.32604/cmc.2022.019001

    Abstract The performance and accuracy of computer vision systems are affected by noise in different forms. Although numerous solutions and algorithms have been presented for dealing with every type of noise, a comprehensive technique that can cover all the diverse noises and mitigate their damaging effects on the performance and precision of various systems is still missing. In this paper, we have focused on the stability and robustness of one computer vision branch (i.e., visual object tracking). We have demonstrated that, without imposing a heavy computational load on a model or changing its algorithms, the drop in the performance and accuracy…

  • Open Access

    ARTICLE

    Image-to-Image Style Transfer Based on the Ghost Module

    Yan Jiang1, Xinrui Jia1, Liguo Zhang1,2,*, Ye Yuan1, Lei Chen3, Guisheng Yin1

    CMC-Computers, Materials & Continua, Vol.68, No.3, pp. 4051-4067, 2021, DOI:10.32604/cmc.2021.016481

    Abstract The technology for image-to-image style transfer (a prevalent image processing task) has developed rapidly. The purpose of style transfer is to extract a texture from the source image domain and transfer it to the target image domain using a deep neural network. However, the existing methods typically have a large computational cost. To achieve efficient style transfer, we introduce a novel Ghost module into the GANILLA architecture to produce more feature maps from cheap operations. Then we utilize an attention mechanism to transform images with various styles. We optimize the original generative adversarial network (GAN) by using more efficient calculation…
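
    The Ghost module itself predates this paper (it originates in GhostNet): a regular convolution produces a few intrinsic feature maps, then cheap depthwise convolutions generate the remaining "ghost" maps. A compact PyTorch version, independent of the paper's GANILLA integration:

        import math
        import torch
        import torch.nn as nn

        class GhostModule(nn.Module):
            """Emit out_ch feature maps, most of them from cheap depthwise ops."""
            def __init__(self, in_ch: int, out_ch: int, ratio: int = 2):
                super().__init__()
                init_ch = math.ceil(out_ch / ratio)      # intrinsic maps
                cheap_ch = init_ch * (ratio - 1)         # "ghost" maps
                self.out_ch = out_ch
                self.primary = nn.Sequential(
                    nn.Conv2d(in_ch, init_ch, 1, bias=False),
                    nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
                self.cheap = nn.Sequential(              # depthwise convolution
                    nn.Conv2d(init_ch, cheap_ch, 3, padding=1,
                              groups=init_ch, bias=False),
                    nn.BatchNorm2d(cheap_ch), nn.ReLU(inplace=True))

            def forward(self, x):
                intrinsic = self.primary(x)
                ghosts = self.cheap(intrinsic)
                return torch.cat([intrinsic, ghosts], dim=1)[:, :self.out_ch]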

  • Open Access

    ARTICLE

    Data Augmentation Technology Driven By Image Style Transfer in Self-Driving Car Based on End-to-End Learning

    Dongjie Liu1, Jin Zhao1,*, Axin Xi2, Chao Wang1, Xinnian Huang1, Kuncheng Lai1, Chang Liu1

    CMES-Computer Modeling in Engineering & Sciences, Vol.122, No.2, pp. 593-617, 2020, DOI:10.32604/cmes.2020.08641

    Abstract With the advent of deep learning, self-driving schemes based on deep learning are becoming increasingly popular. Robust perception-action models should learn from data covering diverse scenarios and real behaviors, yet current end-to-end model learning is generally limited to training on massive data, innovations in deep network architecture, and learning in-situ models in simulation environments. We therefore introduce a new image style transfer method into data augmentation, improving the diversity of limited data by changing the texture, contrast ratio, and color of images, and then extend it to scenarios that the model has been…
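
    The contrast and color part of this augmentation maps directly onto standard image transforms, with the style-transfer model slotting in as one more transform. A sketch using torchvision; `style_transfer` is a placeholder for whatever PIL-to-PIL transfer model is used.

        from torchvision import transforms

        def build_augmentation(style_transfer=None):
            """Diversify limited driving data by varying color and texture."""
            ops = [transforms.ColorJitter(brightness=0.3, contrast=0.3,
                                          saturation=0.3, hue=0.05)]
            if style_transfer is not None:
                # Texture/style change (e.g. night or rain looks); geometry is
                # untouched, so recorded steering labels remain valid.
                ops.append(transforms.Lambda(style_transfer))
            ops.append(transforms.ToTensor())
            return transforms.Compose(ops)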

Displaying 1-7 of 7 results.