Special Issues

AI-Powered Software Engineering

Submission Deadline: 31 January 2026

Guest Editors

Prof. Shang-Pin Ma

Email: albert@ntou.edu.tw

Affiliation: Department of Computer Science and Engineering, National Taiwan Ocean University, Keelung City, 202, Taiwan

Research Interests: software engineering, service-oriented computing, chatbot architecture


Assoc. Prof. Shin-Jie Lee

Email: jielee@mail.ncku.edu.tw

Affiliation: Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan City, 701, Taiwan

Research Interests: software engineering, AI for software testing, web automation


Prof. Wen-Tin Lee

Email: wtlee@mail.nknu.edu.tw

Affiliation: Department of Software Engineering and Management, National Kaohsiung Normal University, Kaohsiung, 802, Taiwan

Research Interests: software engineering, software testing, microservice architecture, DevOps, artificial intelligence (AI)


Prof. Hsi-Min Chen

Email: hsiminc@fcu.edu.tw

Affiliation: Department of Information Engineering and Computer Science, Feng Chia University, Taichung, 407, Taiwan

Research Interests: software engineering, programming education, service-oriented computing, distributed computing


Prof. Nien-Lin Hsueh

Email: nlhsueh@fcu.edu.tw

Affiliation: Department of Information Engineering and Computer Science, Feng Chia University, Taichung, 407, Taiwan

Research Interests: software engineering, software testing, software framework design, programming education


Summary

Artificial intelligence (AI) is revolutionizing software engineering practices across the development lifecycle. The emergence of large language models (LLMs) and other foundation models creates unprecedented opportunities for automation, collaboration, and innovation in software creation. This special issue aims to explore cutting-edge research in AI-Powered Software Engineering, emphasizing novel applications that enhance the productivity, quality, and reliability of software systems. We welcome studies that advance the synergistic relationship between software engineers and AI systems, address trustworthiness concerns, and propose novel metrics for evaluating AI efficacy in software engineering contexts.

This special issue focuses on original research papers addressing AI applications in software engineering that are aligned with the scope of the CMC journal. We seek high-quality contributions that demonstrate innovative AI approaches to solve challenges in software design, development, testing, and maintenance. The special issue welcomes interdisciplinary research that bridges AI with software engineering methodologies and practices.

Suggested Themes:
· Requirements and Design
  · AI-assisted software design and model-driven engineering
  · Prompt engineering for SE
· Development and Testing
  · AI-enabled code generation and program repair
  · Test case generation and defect prediction
  · Novel efficacy metrics for AI-powered tools
· Operations and Maintenance
  · AI for DevOps automation
  · AI-assisted software maintenance and evolution
· Cross-cutting Concerns
  · Human-centered and collaborative AI for SE
  · Trustworthy AI systems for software engineering
  · AI for programming education
  · Empirical studies of AI tools in practice


Keywords

software engineering, artificial intelligence, large language models, DevOps, software testing, programming education, human-AI collaboration

Published Papers


  • Open Access

    ARTICLE

    Research on Automated Game QA Reporting Based on Natural Language Captions

    Jun Myeong Kim, Jang Young Jeong, Shin Jin Kang, Beomjoo Seo
    CMC-Computers, Materials & Continua, DOI:10.32604/cmc.2025.071084
    (This article belongs to the Special Issue: AI-Powered Software Engineering)
    Abstract: Game Quality Assurance (QA) currently relies heavily on manual testing, a process that is both costly and time-consuming. Traditional script- and log-based automation tools are limited in their ability to detect unpredictable visual bugs, especially those that are context-dependent or graphical in nature. As a result, many issues go unnoticed during manual QA, which reduces overall game quality, degrades the user experience, and creates inefficiencies throughout the development cycle. This study proposes two approaches to address these challenges. The first leverages a Large Language Model (LLM) to directly analyze gameplay videos, detect visual bugs, and…

  • Open Access

    ARTICLE

    Beyond Accuracy: Evaluating and Explaining the Capability Boundaries of Large Language Models in Syntax-Preserving Code Translation

    Yaxin Zhao, Qi Han, Hui Shu, Yan Guang
    CMC-Computers, Materials & Continua, DOI:10.32604/cmc.2025.070511
    (This article belongs to the Special Issue: AI-Powered Software Engineering)
    Abstract: Large Language Models (LLMs) are increasingly applied in the field of code translation. However, existing evaluation methodologies suffer from two major limitations: (1) the high overlap between test data and pretraining corpora, which introduces significant bias in performance evaluation; and (2) mainstream metrics focus primarily on surface-level accuracy, failing to uncover the underlying factors that constrain model capabilities. To address these issues, this paper presents TCode (Translation-Oriented Code Evaluation benchmark)—a complexity-controllable, contamination-free benchmark dataset for code translation—alongside a dedicated static feature sensitivity evaluation framework. The dataset is carefully designed to control complexity along multiple dimensions—including syntactic…
