Open Access
ARTICLE
Super-Resolution Generative Adversarial Network with Pyramid Attention Module for Face Generation
1 Amrita School of Computing, Amrita Vishwa Vidyapeetham, Amaravati, 522503, India
2 Department of Teleinformatics Engineering, Federal University of Ceará, Fortaleza, 60455-970, Brazil
3 Department of Information Technology, Siddhartha Academy of Higher Education, Vijayawada, 520007, India
4 Department of Electronics and Communication Engineering, Sreenidhi Institute of Science and Technology, Hyderabad, 501301, India
5 Department of AI, Prince Mohammad bin Fahd University, Alkhobar, 31952, Saudi Arabia
6 Center for Computational Social Science, Hanyang University, Seoul, 01000, Republic of Korea
7 Department of Computer Science, Hanyang University, Seoul, 01000, Republic of Korea
* Corresponding Author: Byoungchol Chang. Email:
Computers, Materials & Continua 2025, 85(1), 2117-2139. https://doi.org/10.32604/cmc.2025.065232
Received 07 March 2025; Accepted 19 May 2025; Issue published 29 August 2025
Abstract
The generation of high-quality, realistic faces has emerged as a key field of research in computer vision. This paper proposes a robust approach that combines a Super-Resolution Generative Adversarial Network (SRGAN) with a Pyramid Attention Module (PAM) to enhance the quality of deep face generation. The SRGAN framework is designed to improve the resolution of generated images, addressing common challenges such as blurriness and a lack of intricate details. The Pyramid Attention Module complements this process by focusing on multi-scale feature extraction, enabling the network to capture finer details and complex facial features more effectively. The proposed method was trained and evaluated over 100 epochs on the CelebA dataset, demonstrating consistent improvements in image quality and a marked decrease in generator and discriminator losses, reflecting the model's capacity to learn and synthesize high-quality images effectively, given adequate computational resources. Experimental outcomes demonstrate that the SRGAN model with the PAM outperforms the comparison models, yielding an aggregate discriminator loss of 0.055 on real images and 0.043 on fake images, and a generator loss of 10.58 after training for 100 epochs. The model achieves a structural similarity index measure (SSIM) of 0.923, exceeding the other models considered in the current study.
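To make the multi-scale attention idea concrete, the following is a minimal sketch (not the authors' implementation) of a pyramid attention block, assuming PyTorch; the class name, pyramid scales, and layer choices are illustrative assumptions only. The block pools generator features at several resolutions, fuses them into per-channel attention weights, and reweights the input feature map.

```python
# Minimal sketch of a pyramid attention block, assuming PyTorch.
# Names (PyramidAttentionSketch, scales) are hypothetical, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PyramidAttentionSketch(nn.Module):
    """Pool features at multiple scales, derive attention weights,
    and reweight the input feature map."""

    def __init__(self, channels: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # One lightweight 1x1 conv per pyramid level to summarize pooled features.
        self.level_convs = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=1) for _ in scales
        )
        self.fuse = nn.Sequential(
            nn.Conv2d(channels * len(scales), channels, kernel_size=1),
            nn.Sigmoid(),  # attention weights in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        levels = []
        for scale, conv in zip(self.scales, self.level_convs):
            # Downsample by the pyramid factor, then upsample back to full size.
            pooled = F.adaptive_avg_pool2d(x, (max(h // scale, 1), max(w // scale, 1)))
            levels.append(F.interpolate(conv(pooled), size=(h, w), mode="nearest"))
        attention = self.fuse(torch.cat(levels, dim=1))
        return x * attention  # reweight super-resolution features


if __name__ == "__main__":
    features = torch.randn(1, 64, 32, 32)               # e.g., generator feature maps
    print(PyramidAttentionSketch(64)(features).shape)   # torch.Size([1, 64, 32, 32])
```

In an SRGAN-style generator, such a block would typically sit after the residual blocks so that upsampling layers operate on attention-reweighted features; the exact placement in the published model may differ.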
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

